System and method for evaluating network threats and usage

Information

  • Patent Grant
  • Patent Number
    10,230,746
  • Date Filed
    Monday, January 30, 2017
  • Date Issued
    Tuesday, March 12, 2019
Abstract
Systems and methods are presented for generating a threat score and a usage score of each of a plurality of IP addresses. The threat score may be determined based on quantity of occurrences and recency of each occurrence of an IP address in network alert datasets, in addition to a weighting factor for each data source indicating the accuracy of the data source.
Description
TECHNICAL FIELD

The present disclosure relates to systems and techniques for generating scores representing the threat reputation and usage of respective IP addresses.


BACKGROUND

Traditional IP address blacklists and whitelists have to be updated periodically and contain many false positives. Traditional methods of classifying an IP address as a threat can mistakenly classify IP addresses of employees and authorized users as threats.


SUMMARY

There is a need to generate threat reputation scores and usage scores of IP addresses based on reliability of data sources, passage of time, membership in various data sources, and/or amount of threats or uses. There is also a need to understand both the network threat potential and possible trusted affiliation of an IP address at the same time.


In accordance with one aspect, a computer system comprises one or more computer processors and a tangible storage device storing one or more modules configured for execution by the one or more computer processors in order to cause the computer system to: determine an IP address for which a threat score is to be determined; access network alert datasets from each of one or more data sources, the network alert datasets comprising: a plurality of recorded network threat events, date and time of each of the plurality of recorded network threat events, an originating IP address for each of the plurality of recorded network threat events, and/or an event type of each of the plurality of recorded network threat events; determine which of the network alert datasets includes one or more occurrences of the IP address, wherein each occurrence indicates a threat by the IP address; for each of the data sources for which the IP address is a member of the corresponding network alert dataset: determine a quantity of occurrences of the IP address in the network alert dataset; determine a recency of each occurrence of the IP address in the network alert dataset, wherein recency is determined based on an amount of time between respective occurrences and a current time; determine a weighting factor for each of the data sources indicating expected accuracy of respective occurrences indicated in the data source; and determine the threat score for the IP address based at least on the determined quantity of occurrences, the recency of occurrences, and the weighting factor for each of the data sources.


In accordance with another aspect, a computer system comprises one or more computer processors and a tangible storage device storing one or more modules configured for execution by the one or more computer processors in order to cause the computer system to: determine an IP address for which a usage score is to be determined; access network usage datasets from each of one or more data sources, the network usage datasets comprising: a plurality of recorded network usage events, date and time of each of the plurality of recorded network usage events, an originating IP address for each of the plurality of recorded network usage events, and/or an event type of each of the plurality of recorded network usage events; determine which of the network usage datasets includes one or more occurrences of the IP address, wherein each occurrence indicates a usage by the IP address; for each of the data sources for which the IP address is a member of the corresponding network usage dataset: determine a quantity of occurrences of the IP address in the network usage dataset; determine a recency of each occurrence of the IP address in the network usage dataset, wherein recency is determined based on an amount of time between date and time of respective occurrences and a current time; determine a weighting factor for each of the data sources indicating authority of each of the data sources; and determine a usage score for the IP address based at least on the determined quantity of occurrences, the recency of occurrences, and the weighting factor for each of the data sources.


In accordance with another aspect, a non-transitory computer-readable storage medium stores computer-executable instructions configured to direct a computing system to: determine an IP address for which a threat score is to be determined; access network alert datasets from each of one or more data sources, the network alert datasets comprising: a plurality of recorded network threat events, date and time of each of the plurality of recorded network threat events, an originating IP address for each of the plurality of recorded network threat events, and/or an event type of each of the plurality of recorded network threat events; determine which of the network alert datasets includes one or more occurrences of the IP address, wherein each occurrence indicates a threat by the IP address; for each of the data sources for which the IP address is a member of the corresponding network alert dataset: determine a quantity of occurrences of the IP address in the network alert dataset; determine a recency of each occurrence of the IP address in the network alert dataset, wherein recency is determined based on an amount of time between respective occurrences and a current time; determine a weighting factor for each of the data sources indicating expected accuracy of respective occurrences indicated in the data source; and determine the threat score for the IP address based at least on the determined quantity of occurrences, the recency of occurrences, and the weighting factor for each of the data sources.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 illustrates one embodiment of an IP reputation system, various data sources, modules, and data flow in the system.



FIG. 2 illustrates an embodiment of a system, showing various data sources and information collected from the various data sources.



FIG. 3 illustrates an embodiment of the IP reputation system and factors considered by the system in generating threat reputation scores and usage scores.



FIG. 4 illustrates three stages of generating threat reputation scores and usage scores using the IP reputation system.



FIG. 5 is a flowchart depicting an illustrative process of determining weightings of data sources.



FIG. 6 is a flowchart depicting an illustrative process of calculating a threat reputation score for an IP address.



FIG. 7 is a flowchart depicting an illustrative process of calculating a usage score for an IP address.



FIG. 8 is a two-dimensional heat map illustrating threat reputation scores and usage scores of IP addresses.



FIG. 9 is a block diagram illustrating one embodiment of a computer system with which certain methods and modules discussed herein may be implemented.





DETAILED DESCRIPTION OF CERTAIN EMBODIMENTS
Definitions

In order to facilitate an understanding of the systems and methods discussed herein, a number of terms are defined below. The terms defined below, as well as other terms used herein, should be construed to include the provided definitions, the ordinary and customary meaning of the terms, and/or any other implied meaning for the respective terms. Thus, the definitions below do not limit the meaning of these terms, but only provide exemplary definitions.


Ontology: Stored information that provides a data model for storage of data in one or more databases. For example, the stored data may comprise definitions for object types and property types for data in a database, and how objects and properties may be related.


Database: A broad term for any data structure for storing and/or organizing data, including, but not limited to, relational databases (Oracle database, MySQL database, etc.), spreadsheets, XML files, and text files, among others. It is also called a data store or a data structure herein.


Data Object or Object: A data container for information representing specific things in the world that have a number of definable properties. For example, a data object can represent an entity such as a person, a place, an organization, a market instrument, or other noun. A data object can represent an event that happens at a point in time or for a certain duration. A data object can represent a document or other unstructured data source such as an e-mail message, a news report, or a written paper or article. Each data object may be associated with a unique identifier that uniquely identifies the data object. The object's attributes (e.g. metadata about the object) may be represented in one or more properties.


Object Type: Type of a data object (e.g., person, event, or document). Object types may be defined by an ontology and may be modified or updated to include additional object types. An object definition (e.g., in an ontology) may include how the object is related to other objects, such as being a sub-object type of another object type (e.g. an agent may be a sub-object type of a person object type), and the properties the object type may have.


Properties: Attributes of a data object that represent individual data items. At a minimum, each property of a data object has a property type and a value or values.


Property Type: The type of data a property is, such as a string, an integer, or a double. Property types may include complex property types, such as a series of data values associated with timed ticks (e.g., a time series).


Property Value: The value associated with a property, which is of the type indicated in the property type associated with the property. A property may have multiple values.


Link: A connection between two data objects, based on, for example, a relationship, an event, and/or matching properties. Links may be directional, such as one representing a payment from person A to B, or bidirectional.


Link Set: Set of multiple links that are shared between two or more data objects.


Threat Reputation Score (Threat Score): A score that represents the maliciousness of an IP address. It can be a probability of an IP address being involved in an actual network security threat based on historical network security data. The score may also be called a "threat score" and/or a "risk score."


Usage Score: A score that represents a likelihood that an IP address is trusted and, therefore, is not involved in threat activities associated with an entity. For example, a usage score may indicate how actively an IP address is used by a trusted user, such as a customer, an employee, or an authorized user of an entity, as opposed to untrusted and/or unauthorized users of the entity's computing network. It is also called a "customer and employee usage score."


IP Reputation System



FIG. 1 illustrates one embodiment of an IP reputation system, various data sources, modules, and data flow in the system. The system 100 includes multiple data sources, including data sources 102, 104, 106, 108, which represent different example data source types. In particular, data source 102 represents an intrusion detection system, which may include a device or application that monitors network or system activities for malicious activities or policy violations, and reports such activities to a management device or system. Data source 104 represents a firewall, which may include a device-based or application-based network security system that controls incoming and/or outgoing network traffic by analyzing data packets and determining whether the traffic should be allowed through or not, based on an applied rule set. Data source 106 represents a proxy server, which may include a computing system or an application that acts as an intermediary for requests from clients seeking resources from other computing resources. Data source 106 may include a web proxy, a database proxy, a reverse proxy, and so forth. Data source 108 represents a Virtual Private Network (VPN), which may enable a computing device to send and receive data across shared or public networks as if the computing device were directly connected to the private network.


Other types of data sources, such as mobile computing devices, game servers, and so forth, may also provide input data regarding network security events. For example, a mobile device may act as a hotspot for other devices. The hotspot application installed on the mobile device may maintain a log of potential threats as well as of the users, accounts, and/or devices that are authorized to use the hotspot. Other types of data sources not explicitly mentioned may also be used.


Data sources such as depicted in FIG. 1 may maintain logs of network traffic, including IP addresses of various computing devices that are connected to and/or request resources from the data sources. For example, a VPN is usually associated with a VPN log. The VPN log allows administrators, users, and network security analysts to determine the IP addresses, entities, locations, and so forth, of the computing devices that have been connected to the VPN. Similarly, a firewall log reveals a great deal of information about security threat attempts against a network, as well as the nature of the traffic coming in and going out of the firewall. Logs from an intrusion detection system 102, a proxy server 106, and so forth usually also include historic connection information for network traffic.


In some embodiments, some data sources, such as an intrusion detection system 102, may also maintain "black lists," which include IP addresses that the data sources deem dangerous. Some data sources publish and share such black lists periodically with the public. Some data sources maintain proprietary black lists shared only internally within an organization. Some software providers have black lists that may be included with a purchase of proprietary network security software. There are also websites that allow users to check whether an IP address is included in one or more of such black lists maintained by various sources.


In some embodiments, some data sources, such as VPN 108, may include a database of trusted users, a trusted user table, or a list of authorized users or user computing devices. This may also be referred to as a “white list” or a “trusted list.” For example, a VPN server may maintain one or more data tables of users who are authorized to log in to the VPN server and connect to a private network. Membership in a “white list” usually means that the user is a trusted user, an employee of an organization, or someone authorized to access a private network.


In some embodiments, various computing devices and/or users may be designated as safe so that communications with those safe computing devices are not erroneously designated as potentially dangerous. For example, in a company that tests SPAM email detection software, a testing computer that sends out SPAM emails on a regular basis may be marked as a safe computer and given access to various network resources.


The system 100 also includes a network 110, users 120 that are connected to the network 110, and administrators 130 who are connected to the network 110. The system 100 includes an IP reputation system 150, which is in communication with one or more of the data sources and provides IP reputation data to users, among other functions that are discussed herein.


Depending on the embodiment, the IP reputation system 150 may include a threat reputation score module 152, a usage score module 154, a weighting module 156, and a reporting module 158, discussed further below. The IP reputation system 150 may also include a data store 160. In some embodiments, the data store 160 may be located remotely from the IP reputation system 150. The IP reputation system 150 and its various modules may receive input data from data sources 102, 104, 106, 108, and other types of sources of network traffic and security data.


In general, the IP reputation system 150 accesses data at multiple data sources in order to assess characteristics of particular IP addresses. The weighting module 156 may generate weights for respective data sources based on the historic accuracy of their network threat reports. Depending on the embodiment, the more accurate a data source has been in terms of successful past threat alerts, the more weight is assigned to incidents reported by that data source. Various methods for generating weights for respective data sources are further discussed below.


In some embodiments, the threat reputation score module 152 may use network security information from the data sources 102, 104, 106, and 108 (including network threats, time, location, IP address, and so forth), weights generated by the weighting module 156, and/or additional information such as an IP address's membership in a "blacklist" or a "watch list" in a data source, to generate threat reputation scores for individual IP addresses and/or groups of IP addresses. Various methods for generating the threat reputation scores are further discussed below. Depending on the embodiment, network threats may include various suspicious, unwanted, and/or illegal activities. For example, network threats may include network attacks (e.g., denial of service attacks) and/or threats (e.g., activities that do not rise to the level of an attack, but are suspicious, unwanted, and/or illegal).


In some embodiments, the usage score module 154 may use network security information from data sources 102, 104, 106, and 108 (including network attacks, time, location, IP address, and so forth), weights generated by the weighting module 156, and/or additional information such as an IP address's membership in a trusted employee list or inclusion in a trusted user list in a data source, to generate usage scores for individual IP addresses and/or groups of IP addresses. Various methods for generating the usage scores are further discussed below.


Depending on the embodiment, the reporting module 158 may generate a user interface, a heat map, a website, or some other kind of representation of the scores generated by the threat reputation score module 152 and/or the usage score module 154. The reporting module 158 may also send scores to the users 120 and/or administrators 130 directly in a summarized report, identifying potentially important IP addresses that the administrators 130 or the users 120 should pay special attention to. Further details regarding the reporting module 158 are discussed below.



FIG. 2 illustrates two data sources 250 and 255, and information collected from those data sources. As shown in FIG. 2, a server computer 225, a mobile phone/computing device 230, and a personal computer 235 are all connected to the network server 250 via network 203. The network server 250 may be an email server, web server, database server, print server, file server, authentication server, or a computing node acting as a peer-to-peer server (P2P server), and so forth. In this example, a computing node 240 and a laptop computer 245 are connected to the network access device 255 via network 205. The network access device 255 may be a router, switch, network bridge, and so forth.


In some embodiments, the network server 250 and network access device 255 each maintain a log of historic network security events that are believed to be potentially noteworthy. For example, the network server 250 may maintain a log of suspicious activities as shown in table 210. Depending on the embodiment, the log may include information such as the host IP address, the date and time of the hit (event), the name of the Internet Service Provider (ISP) if known, and the type of event. In the example shown in FIG. 2, the types of events include spyware, peer-to-peer (P2P), advertising, malicious attack, suspicious and/or illegal activities, and some unknown activities that could be potentially suspicious or dangerous. Depending on the embodiment, other types of threats or suspicious activities may also be included, such as sending SPAM emails, too many failed authentication requests, and so forth. Each activity, its originating IP address, date and time, and ISP information may also be included in the log. As shown in FIG. 2, the network access device 255 also maintains a log of suspicious activities, as shown in table 220, in a format that is similar to table 210. Depending on the embodiment, the security events may be stored in any format and include additional and/or less information than is shown in the example tables 210 and 220.
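For illustration only (the disclosure does not prescribe a particular data model), a log row of the kind described for tables 210 and 220 could be represented as a simple record; the field names here are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class SecurityEvent:
    """One row of a log such as table 210 or 220 (field names are illustrative)."""
    host_ip: str          # originating IP address of the event
    timestamp: datetime   # date and time of the hit
    isp: Optional[str]    # Internet Service Provider, if known
    event_type: str       # e.g., "spyware", "P2P", "advertising", "malicious attack"

# Example entries mirroring the kinds of rows described for tables 210 and 220
events = [
    SecurityEvent("110.110.110.110", datetime(2013, 12, 5, 22, 49), "ExampleISP", "malicious attack"),
    SecurityEvent("133.109.7.42", datetime(2013, 12, 5, 23, 2), None, "P2P"),
]
```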


Analyzing information stored in table 210 and table 220 can be difficult for several reasons. First, there can be many false alerts. For example, if a trusted user who has VPN or other types of access to the network server 250 has forgotten his or her password and tried unsuccessfully to log onto the network server 250 many times in a short period, this could be seen as a potential security threat and recorded in the log. A user or an administrator cannot easily tell that the IP address of the trusted user is not initiating an attack or an otherwise truly alert-worthy activity. Second, a busy server or network access device may receive a huge number of visits or resource requests per second. Therefore, the logs can be much longer than the tables 210 and 220. It is virtually impossible for humans to analyze such data. It is also slow and inefficient to spot false alerts using traditional programs that monitor such activities, because traditional programs merely maintain a list of suspicious IP addresses or computing device identities.



FIG. 3 is a conceptual block diagram illustrating example factors that may be considered by the system in generating threat reputation scores and usage scores, such as for a particular IP address 310. Depending on the embodiment, the IP reputation system 150 may consider data from various other data sources in determining attributes of an IP address.


As indicated in FIG. 3, membership 320 in public and/or private blacklists and trusted user lists (e.g., whitelists, such as authorized VPN users) may be considered in determining scores or other characteristics of the IP address 310.


The IP reputation system 150 may consider the recency 322 of the suspicious events. Generally, the more recent a suspicious event is, the more indicative it is of the risk potential of the IP address 310. Similarly, the more recent a trusted event (such as an authorized device logging into a VPN), the more probative it is regarding the trustworthiness of the IP address 310.


The IP reputation system 150 may also consider the quantity 324 of suspicious events or trusted events originating from the IP address 310. Generally, the more suspicious activities that an IP address 310 is involved in, the more likely that the IP address 310 may pose a security threat. Similarly, the more trusted events an IP address 310 is involved in, the more likely that the IP address 310 is an IP address that is used by an employee or an otherwise authorized/trusted user.


In addition, the IP reputation system 150 may also consider the severity 328 of suspicious events originating from the IP address 310. Depending on the embodiment, potentially suspicious events may be categorized according to various standards and/or conventions. For example, a malicious attack may be more serious than advertising. However, the level of severity may also be adjusted or customized based on different organizational needs. For example, an organization may want to identify IP addresses that are associated with disseminating copyrighted materials online. Accordingly, the IP reputation system 150 may set the severity 328 of P2P events and potential sharing of large files higher than normal.


Moreover, severity 328 may also be affected by origin of the IP address. For example, if the IP address is from a known notorious source of hacking activities, then the severity used for calculating the threat reputation score may be higher than normal even for suspicious events of the same type.


The risk assessment 340 for the particular IP address may include both a usage score (also called a good score) and a threat reputation score (also called a threat score or a bad score). The risk assessment 340 may be provided to an entity in various formats, such as via one or more user interfaces that display usage scores, threat scores, and/or other information regarding particular IP addresses (e.g., see the example user interfaces of FIG. 4). The usage score may represent how trustworthy the IP address is. For example, if the usage score is based on the membership information 320, such as a list of trusted and/or authorized users and their device information, and the IP address 310 is associated with an employee's device (e.g., employee's cellphone) in the membership information 320, the usage score for the IP address 310 may be relatively high. In another example, if the IP address 310 has been involved in multiple actual threats based on quantity 324 and actual threat data 326, it is more likely that the IP address 310 may have a higher threat score. In some embodiments, the recency 322 and severity 328 also play important roles in determining the threat score.



FIG. 4 is a flow diagram illustrating various types of IP scoring/ratings that may be generated by the IP reputation system. The example embodiment 400 includes a list of IP addresses 410, which may also include additional information such as event type, time, originating location, settings, severity, type, and so forth, regarding respective IP addresses. During the first stage of generating the scores, the list of IP addresses 410 may be gathered from a variety of data sources. As discussed, the data sources may include all kinds of computing, networking, and/or mobile devices.


During the second stage of generating the scores as shown in FIG. 4, the IP reputation system 150 analyzes the IP address usage data 410, which may include various types of data, such as those illustrated in FIG. 3. In some embodiments, the IP address usage data 410 can include IP addresses, activities associated with the IP addresses, connection types, date and time, and so forth. Because a given IP address may appear in multiple different threat data sources and each occurrence can be considered, input data from across a plurality of data sources may be considered in order to generate the scores of a given IP address.


During the third stage of generating the scores, the IP reputation system 150 may generate and present scores, ratings, and/or other summary data related to the IP addresses, to various users. The scores may be presented to users in various formats via the reporting module 158. For example, a table 420 may be generated and presented to a user or an administrator. The table 420, as shown, includes four IP addresses and their respective pair of scores—a threat score and a usage score for each IP address. This format allows a user or an administrator to easily identify interesting targets for further investigation. For example, the IP address “58.58.23.145” has both a high threat reputation score (0.78) and a relatively high usage score (0.9). The high threat reputation score may be based largely on the fact that the IP address frequently appears in an intrusion detection system, while the high usage score may indicate that the IP address is used by someone with trusted access, such as an employee who regularly connects to the network with VPN. In this example, it may be unwise to simply blacklist this IP address and prohibit it from connecting in the future.


The table 430, as shown, includes additional example representations of risk levels. In this example, a “risk” representation has taken both the threat reputation score and the usage score into consideration already. For example, the IP address “58.58.23.145” has both a high threat reputation score (0.78) and a relatively high usage score (0.9). Therefore, in terms of risk, it is shown in table 430 as only having one “bomb” associated with it—less than the risk rating of two bombs given to IP address “133.109.7.42,” which is associated with a lower usage score of 0.03, which may indicate that the moderate threat score of 0.66 is not mitigated by appropriate usage data associated with the IP address. Depending on the embodiment, other graphical indicators may be provided (e.g., rather than the bombs shown in example table 430), and various algorithms may be used in interpreting threat usage and/or usage scores in order to determine graphical representations.


Depending on the embodiment, the IP reputation system 150 may also generate threat reputation scores and usage scores and list them in data structures, such as example tables 440 and 450, as shown in FIG. 4. A user or an administrator may sort the scores and identify the IP addresses that are most dangerous or most trustworthy, or that are most likely candidates for false alarms (e.g., an IP address with a high threat score and a high usage score). The usage and threat data in these data structures may then be analyzed in various manners in order to provide an end user with the data in the form best suited for consumption, whether it be a risk score table such as table 420, a risk rating graphical indicator such as in table 430, or some other form.


Example Threat Reputation Scoring Methods



FIG. 5 is a flowchart depicting an illustrative process of determining weightings of data sources. The process of FIG. 5 may be performed by the IP reputation system 150 in response to input from one or more data sources, for example, such as a log from a VPN, an intrusion detection system, a computing device, a network device, a server, and so forth. However, the process may also be performed by other computing systems in some embodiments. Depending on the embodiment, the method of FIG. 5 may include fewer or additional blocks and the blocks may be performed in an order that is different than illustrated.


The process 500 begins at block 510, wherein threat related data is received from multiple data sources. As previously discussed, the IP reputation system 150 may calculate threat reputation scores using data from one or more data sources. One benefit of using data across a variety of data sources is that different data sources have differing amounts of data and differing levels of accuracy.


The process 500 then proceeds to block 520, and the accuracy of threat data from the one or more data sources is determined. In some embodiments, in order to determine the accuracy of the various data sources, the IP reputation system 150 may compare data received from various data sources to identify overlaps. For example, if an intrusion detection system reports a suspicious activity from IP address 110.110.110.110 at 10:49 PM on Dec. 5, 2013, and a firewall installed on the same internal network also reports a suspicious activity from the same IP address 110.110.110.110 at 10:49 PM on Dec. 5, 2013, then it is more likely that both are accurate regarding this particular activity and IP address.


In some other embodiments, the IP reputation system 150 may compare the reported data from various data sources against known (e.g., confirmed) security threats. For example, the IP reputation system 150 may maintain a list of known security threats for a given period of time. The IP reputation system 150 may then identify the alerts as reported by various data sources relevant to the IPs in the known security threats during the same period of time. For example, a data source may provide threat data associated with a particular IP address on day 1, but that particular threat is not confirmed until day 10 (nine days after the data source originally indicated that there is a threat risk associated with the IP address, and before the threat could be confirmed). Because the data source accurately indicated a threat risk that turned into an actual threat, future data from that particular data source may be very valuable. Accordingly, the IP reputation system 150 may assign a high weighting to threat risk from that data source (or perhaps some subset of threat data from that data source, such as threat data of the same type that has been associated with later confirmed threats). Conversely, if a data source provides threat data that is never associated with an actual threat (e.g., within a predetermined time period after the threat data is received), the IP reputation system 150 may assign a lower weighting to that data source, or to some subset of threat data provided from that data source. Depending on the embodiment, weightings may be determined in real-time (e.g., each time a risk score for an IP address is requested), or in some scheduled manner, such as nightly based on new threat data received from various sources and confirmed threats that may be associated/linked to previously received threat data.


The process 500 then proceeds to block 525 wherein the IP reputation system 150 calculates weightings for respective data sources. In the calculation of threat reputation scores, the weight used for each data source i may be represented as a value c_i. Depending on the embodiment, the weight for a data source may be an estimated percentage of its IP addresses that are involved in actual threats. Depending on the embodiment, the percentage may be calculated in different ways depending on the data sources. Moreover, the weights may be updated over time or as needed.


In addition, the method of how weights are calculated can be further designed to be configurable based on the type of data source. For example, for data sources that are, or are similar to, alerting systems (e.g., an Intrusion Detection System, or IDS), there may be reported malicious IP addresses that were used in actual threats in the past (e.g., previously recorded historical threats). For known malicious IPs used in actual threats, the IP reputation system 150 may divide the number of alerts relevant to those IP addresses by the total number of alerts during the time frame of a given actual threat. This value serves as a rough "signal-to-noise" ratio that can be used as a weight. The ratio can be more accurate as more data regarding malicious IP addresses becomes available. Additionally, feedback from analysts who work with these alerting systems may also be considered.
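As a non-authoritative sketch of the "signal-to-noise" weighting described above, assuming a hypothetical per-alert IP list and a set of known-malicious IPs as inputs:

```python
from typing import Iterable, Set

def alerting_source_weight(alert_ips: Iterable[str], known_malicious_ips: Set[str]) -> float:
    """Weight an alerting data source (e.g., an IDS) by the fraction of its alerts
    that involve IP addresses known to have been used in actual threats."""
    alerts = list(alert_ips)              # one entry per alert during the threat's time frame
    if not alerts:
        return 0.0
    relevant = sum(1 for ip in alerts if ip in known_malicious_ips)
    return relevant / len(alerts)         # rough signal-to-noise ratio used as the weight
```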


For data sources such as external blacklists (e.g., Dell™ SecureWorks), the IP reputation system 150 may estimate the percentage of the IP addresses it predicts will be involved in threats that are actually involved in threats. In some embodiments, the percentage can be calculated by counting the number of IP addresses on each blacklist that appear in alerting system data sources (e.g., intrusion detection systems, SPAM in the ProofPoint enterprise email security system, etc.) during a given time interval after the blacklist was received. In some other embodiments, actual threat data, known actual attack data (such as recorded attack events that are verified), and experimental attack data (such as attacks that are initiated for purposes of analyzing a system and/or testing the alert responses) may also be used.
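A similar sketch for weighting an external blacklist, assuming one can query which blacklisted IPs appeared in alerting-system data during the chosen interval; the default weight value is an arbitrary placeholder (see the low-default-weight behavior discussed next):

```python
from typing import Set

def blacklist_weight(blacklist_ips: Set[str],
                     alerting_ips_in_interval: Set[str],
                     default_weight: float = 0.01) -> float:
    """Estimate the fraction of blacklisted IPs that appear in alerting-system data
    within a given interval after the blacklist was received. Blacklists with no
    overlap fall back to a small default weight so they can still appear in an IP's
    reputation summary without dominating the score."""
    if not blacklist_ips:
        return default_weight
    hits = len(blacklist_ips & alerting_ips_in_interval)
    return (hits / len(blacklist_ips)) if hits else default_weight
```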


In some embodiments, blacklists containing IP addresses that appear in none of the accessible alerting system data sources may be given a low default weight so that they can be configured to appear in the IP address's reputation summary, but do not have a large impact on the score. This situation may occur if the blacklist was received from a source that reported IP addresses using an alerting mechanism different from any of the other alerting systems.


Another type of data source is internal blacklists. In some embodiments, the IP reputation system 150 may use all the IP addresses that appear in the internal blacklists and apply similar weighting methods as previously discussed regarding external blacklists to IP addresses originating from the internal blacklists. In some other embodiments, higher weights may be given to the internal blacklists because they can be considered to be more trustworthy.


The process 500 then proceeds to block 530 wherein the IP reputation system 150 calculates threat reputation scores for respective IP addresses. In some embodiments, the threat reputation score for an IP address may be calculated based on a probability of the given IP address being involved in an actual threat, based on the historical accuracy of the threat data sources in which the IP address appears. For example, each data source is associated with a weight, which can be an estimated percentage of its IP addresses that were actually involved in a threat. If an IP address is reported by multiple data sources, the probabilities may be combined to produce a final score.



FIG. 6 is a flowchart depicting an illustrative process of calculating a threat reputation score for an IP address, such as at block 530 of FIG. 5. The process of FIG. 6 may be performed by the IP reputation system 150 in response to an inquiry regarding the potential threats related to an IP address. For example, an entity may transmit a request for a threat reputation score of an IP address that is requesting access to the entity's network, such as to gauge whether or not the IP address should be blocked from the network. In some embodiments, an entity may transmit a request for generating threat reputation scores for a plurality of IP addresses that may have attempted to or have accessed its network. The request may be processed by the IP reputation system 150 in batch.


The data sources used in the process 600 may include various sources such as a log from a VPN, an intrusion detection system, a computing device, a network device, a server, and so forth. However, the process may also be performed by other computing systems in some embodiments. Depending on the embodiment, the method of FIG. 6 may include fewer or additional blocks and the blocks may be performed in an order that is different than illustrated.


The process 600 includes several blocks that are performed for each of one or more data sources reporting a threat risk for a given IP address. In particular, blocks 610-640 may be performed for each data source.


Beginning at block 610, a data source reporting one or more risks of threat associated with the IP address is identified. The data source may be an alert system, an external blacklist, an internal blacklist, a server log, a device log, and so forth. The data sources that include risk data for the IP address may be used to calculate a weight for one or more of the data sources, such as is discussed above with reference to FIG. 5.


The process 600 then proceeds to block 620, wherein the IP reputation system 150 accesses a weighting for the identified data source. In some embodiments, this can be performed through a query to the data store 160. In one embodiment, the weighting for the data source may be calculated in real-time when needed (e.g., at block 620 of FIG. 6).


The process 600 then proceeds to block 630, wherein the IP reputation system 150 accesses threat risk instances reported by the identified data source regarding the particular IP address. For example, it may be determined that an intrusion detection system reports that the IP address 110.110.110.110 appears on its list 500 times.


The process 600 then proceeds to block 640, wherein the IP reputation system 150 determines recency of each threat event. For example, for each of the 500 times that the IP address 110.110.110.110 appears in the intrusion detection system's report, a timestamp may be associated with each occurrence. The IP reputation system may calculate the difference between the current time and the time as indicated in the timestamp. Various units, such as minutes, seconds, hours, days, months, and so forth, may be used to report the difference based on the user's needs.


The process 600 then proceeds to decision block 650, wherein the IP reputation system 150 determines whether there are other data sources reporting threat risks associated with the particular IP address being considered. If the answer to the question is yes, then the process 600 repeats blocks 610-640 for each additional data source reporting threat risks for this IP address.


If the answer at decision block 650 is no, then the process 600 proceeds to block 660, wherein the IP reputation system 150 calculates a threat score for the IP address. Risk scores may be calculated in many ways using many algorithms and inputs. One example scoring method/algorithm is discussed below. In this simplified example, the IP reputation system 150 receives input from data sources B1 and B2. Historically, 20% of the IP addresses that each of data sources B1 and B2 predicted as future threats were actually involved in past threat events. Knowing this, the IP reputation system 150 may assign a weight of 0.2 to each data source, meaning there is a 20% chance that an individual IP address on either of these lists will be involved in an actual threat event. For a new IP address being investigated that appears in both B1 and B2, there is a (1−0.2)×(1−0.2)=0.64 chance the IP address will not be a real threat. Accordingly, there is a 36% chance the IP would be a real threat. In one embodiment, the IP threat reputation score may be 36%, 0.36, or some other variant of this combined probability.
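The combined-probability arithmetic from this example can be written directly; the 0.2 weights are simply the illustrative values above:

```python
def combined_threat_probability(source_weights):
    """Combine per-source probabilities assuming independence:
    P(threat) = 1 - product of (1 - w_i)."""
    p_no_threat = 1.0
    for w in source_weights:
        p_no_threat *= (1.0 - w)
    return 1.0 - p_no_threat

print(combined_threat_probability([0.2, 0.2]))  # ~0.36, matching the example above
```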


Other factors such as passage of time since the occurrence of an event may be considered in generating the threat reputation score for an IP address. In some embodiments, a decay function can be used to account for the passage of time. In some embodiments, a decay factor between 0 and 1 can be assigned. If an event is less recent (for example, 2 years ago), it is considered less relevant than an event that is more recent. An example decay function is a weighted exponential decay function. In some embodiments, the following exponential decay function may be used by the IP reputation system 150: D_i(t) := e^(C_i·(t − t_0)), wherein i is an indicator of a particular occurrence in a data source (e.g., i may vary from 1 to 500), t is the time associated with the threat event, t_0 is the current time, and C_i is a constant that limits the rate of decay for the data source containing the ith occurrence. In other situations, other decay functions can also be used, such as a constant decay, step decay, linear decay, Weibull decay, hill decay, smooth-compact decay function, and so forth.


In some embodiments, a threat reputation score may be calculated by the IP reputation system 150 as: S_n := 1 − Π_{i=1..n} (1 − c_i·D_i(t_i)), wherein S_n represents the threat reputation score for an IP address considering all n occurrences of that IP address across the data sources considered by the IP reputation system 150, c_i is the weight associated with the data source containing the ith occurrence of the IP address, t_i is the time of the ith occurrence of the IP address (for example, the timestamp of an intrusion detection alert containing the IP address, or the time a blacklist containing the IP address was incorporated into the IP reputation system), and D_i(t) is the decay function for that data source, as discussed previously, which accounts for the passage of time since the ith occurrence of the IP address.
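Putting the decay function and the product formula together, a minimal sketch (the occurrence tuples and constants are illustrative assumptions, not the disclosed implementation):

```python
import math
import time
from typing import List, Optional, Tuple

def decay(t: float, t0: float, C: float) -> float:
    """Exponential decay D_i(t) = e^{C (t - t0)}; with t <= t0 and C > 0 this
    returns 1.0 for a current event and approaches 0 for very old events."""
    return math.exp(C * (t - t0))

def threat_reputation_score(occurrences: List[Tuple[float, float, float]],
                            t0: Optional[float] = None) -> float:
    """occurrences: list of (t_i, c_i, C_i) tuples, one per occurrence of the IP,
    where t_i is the event time (seconds), c_i the data-source weight, and C_i the
    decay-rate constant for that data source.
    Returns S_n = 1 - prod_i (1 - c_i * D_i(t_i))."""
    if t0 is None:
        t0 = time.time()
    product = 1.0
    for t_i, c_i, C_i in occurrences:
        product *= (1.0 - c_i * decay(t_i, t0, C_i))
    return 1.0 - product
```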


In some other embodiments, the threat reputation score of an IP address may also be generated using a formula that is different from the one discussed above. For example, instead of exponential decay, the decay function may be configured as a constant decay (1), a step decay (1 for t < L, 0 otherwise), a linear decay (1 − t/L), a Weibull decay (e^(−(t/L)^k·log(2))), a hill decay (1/(1 + (t/L)^k)), a smooth-compact decay (e^(k − k/(1 − (t/L)^2))), and so forth, wherein L is a rate of decay and k is a shape parameter. The constant value in each decay function may also be configured differently depending on specific use cases.
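For illustration, these alternative decay functions can be sketched as plain functions of the elapsed time t, with L and k as configurable parameters; clamping the linear decay at zero and treating the smooth-compact decay as zero for t ≥ L are added assumptions:

```python
import math

def constant_decay(t, L=None, k=None):
    return 1.0

def step_decay(t, L, k=None):
    return 1.0 if t < L else 0.0

def linear_decay(t, L, k=None):
    return max(0.0, 1.0 - t / L)

def weibull_decay(t, L, k):
    # equals 0.5 at t = L, i.e., L acts as a half-life
    return math.exp(-((t / L) ** k) * math.log(2))

def hill_decay(t, L, k):
    return 1.0 / (1.0 + (t / L) ** k)

def smooth_compact_decay(t, L, k):
    # defined for t < L; decays smoothly toward 0 as t approaches L
    if t >= L:
        return 0.0
    return math.exp(k - k / (1.0 - (t / L) ** 2))
```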


Scoring methods that combine occurrences of IP addresses across many threat related data sources into a single weighted score may provide more valuable scores than a single source score. The above-noted approach gives each data source an independent, configurable weight suitable to the particular IP threat detection needs of a particular user or administrator. Additionally, multiple occurrences of an IP address in the same data source may also be considered, such that the more frequently an IP address appears, the higher its threat reputation score is likely to be. Moreover, using this approach, older events in an IP address's history contribute less than more recent events to the overall threat reputation score of the IP address.


Example Usage Scoring Methods



FIG. 7 is a flowchart depicting an illustrative process of calculating a usage score for an IP address. A usage score provides an indication of the level of non-threatening activity associated with an IP address. For example, if an IP address appears in VPN logs, white lists, or weblogs hitting a post-login URL, it could be used by an employee, customer, or other trusted user. In addition, if proxy server data shows that many employees are regularly connecting to an IP address that appears on a black list, it may be a sign that the listing is a false positive (e.g., perhaps the IP address should not be on the blacklist). Therefore, generating a separate and independent score may be useful in determining an overall reputation for IP addresses.


The process of FIG. 7 may be performed by the IP reputation system 150 in response to input from one or more data sources, for example, such as a log from a VPN, an intrusion detection system, a computing device, a network device, a server, and so forth. However, the process may also be performed by other computing systems in some embodiments. Depending on the embodiment, the method of FIG. 7 may include fewer or additional blocks and the blocks may be performed in an order that is different than illustrated. FIG. 7 includes several blocks that are performed for each of one or more data sources reporting information that is indicative of reduced risk of an actual threat for a given IP address. In particular, blocks 710-740 may be performed for each data source.


The process 700 begins at block 710, wherein a data source reporting decreased likelihood of a threat associated with a particular IP address (or range of IP addresses in some embodiments) is identified. The data source may report network usage events, such as an authorized user logging on to a VPN network, a bank customer logging into his or her banking account, a customer of a business logging into a payment system, an authorized user establishing a connection to a proxy server, and so forth. The data source may be a trusted device list, a VPN log, a secure FTP server log, an authorized user data store, and so on. In addition to receiving reports of network usage events, the process 700 may also use the received reports from the data source to calculate a weight for this data source (e.g., block 720), which may be used to represent how trustworthy the source is, as discussed previously.


The process 700 then proceeds to block 720, and a weighting for the identified data source is accessed. In some embodiments, this can be performed through a query to the data store 160. In some other embodiments, the weights may be accessed directly by the IP reputation system 150 as previously calculated weight data that has already been made available to the system. In one embodiment, the weighting for the data source may be calculated in real-time when needed (e.g., at block 720 of FIG. 7).


The process 700 then proceeds to block 730, and the recency of each network usage event in data from the identified data source is determined. For example, for each of the 500 times that the IP address 110.110.110.110 appears in a VPN log, a timestamp may be associated with each occurrence. The IP reputation system may calculate the difference between the current time and the time as indicated in the timestamp. Various units, such as minutes, seconds, hours, days, months, and so forth, may be used to report the difference based on the user's needs.


The process 700 then proceeds to decision block 740, wherein the IP reputation system 150 determines whether there are still other data sources reporting decreased risks associated with the particular IP address being considered. If the answer to the question in decision block 740 is no, then the process 700 proceeds to block 750 and calculates a usage score for the IP address. If there are additional sources, blocks 710-730 are repeated for each additional data source before proceeding to block 750.


Depending on the embodiment, the usage score can be determined by the sum of all customer, employee, or other trusted usage events, whose contributions to the usage score are decayed over time. In some embodiments, a decay rate similar to the one previously discussed for the threat reputation scores may be used. A different decay function may also be configured as needed. In some embodiments, the usage score can be calculated as S_usage := Σ_{i=1..n} k_i·e^(C_i·(t_i − t_0)), where t_i is the time of the ith event, t_0 is the current time, C_i is a constant used to limit the rate of decay for the data source containing the ith event, and k_i is an optional constant to weight occurrences in some data sources higher than others.


In some embodiments, k_i may be determined based on how reliable a certain data source is. For example, if an IP address appears in the list of internal bank IP addresses, this may indicate that this IP address is more trustworthy than those IP addresses that appear in customer web sessions. Alternatively, in some other embodiments, the value of k_i may be determined using an approach that is similar to the determination of the weight c_i for the data source containing the ith occurrence of the IP address, as discussed with respect to FIG. 5. A weight c_i for a data source can be calculated based on the percentage of IP addresses that are actually used by authorized users, trusted employees/customers, etc., as compared to the total IP addresses reported as being used by such users.
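A minimal sketch of the unnormalized usage score as a decayed, weighted sum, per the formula above; the event tuples and constants are illustrative:

```python
import math
import time
from typing import List, Optional, Tuple

def usage_score(events: List[Tuple[float, float, float]],
                t0: Optional[float] = None) -> float:
    """events: list of (t_i, C_i, k_i) tuples, one per trusted-usage event, where
    t_i is the event time (seconds), C_i limits the decay rate for that data source,
    and k_i optionally weights some data sources higher than others.
    Returns S_usage = sum_i k_i * e^{C_i (t_i - t0)} (unnormalized)."""
    if t0 is None:
        t0 = time.time()
    return sum(k_i * math.exp(C_i * (t_i - t0)) for t_i, C_i, k_i in events)
```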


In some embodiments, in addition to the usage score calculated as above, the usage score is further normalized to constrain the score to a value between 0 and 1. Depending on the embodiment, such normalization may be achieved through a function such as:








Ŝ := (2/π)·arctan(k·S_usage),

where Ŝ represents the normalized usage score, S_usage is the usage score before normalization, and k is a constant that controls how quickly Ŝ approaches 1. The constant k may be configured and changed by a user or an administrator based on need.
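The arctangent normalization can then be applied to the raw sum; k here is the configurable steepness constant described above:

```python
import math

def normalize_usage_score(s_usage: float, k: float = 1.0) -> float:
    """Map an unbounded, non-negative usage score into [0, 1):
    S_hat = (2 / pi) * arctan(k * S_usage)."""
    return (2.0 / math.pi) * math.atan(k * s_usage)
```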


In some embodiments, calculating a separate usage score for an IP address may be preferable to combining it with the threat reputation score. If the usage score is a separate non-zero score, it may indicate immediately to a user or administrator that an investigation may be necessary to see whether an IP address is being used in a non-malicious way before potentially blacklisting the IP address based on a high threat reputation score. On the other hand, it may also indicate to the administrator to check whether the account of a trusted user or employee has been hacked or is being used as a front for attacks.


Moreover, calculating a separate usage score allows the threat reputation score to separately indicate risks associated with an IP address, without being diluted by positive customer and employee activity. Finally, calculating a separate usage score may make it easier to answer a question such as "What percentage of blacklisted IP addresses are our customers' and employees' IP addresses?" This may be achieved by generating the usage score for each blacklisted IP address and checking whether any of the usage scores are non-zero or significantly above zero.
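As a sketch, that question can be answered by iterating over a blacklist with any usage-score lookup (for example, the functions sketched above); the function and argument names are hypothetical:

```python
def percent_blacklisted_ips_with_usage(blacklisted_ips, usage_score_for_ip, threshold=0.0):
    """Return the percentage of blacklisted IP addresses whose usage score exceeds
    the given threshold (i.e., that look like customer/employee IPs).
    `usage_score_for_ip` is any callable mapping an IP string to its usage score."""
    ips = list(blacklisted_ips)
    if not ips:
        return 0.0
    flagged = sum(1 for ip in ips if usage_score_for_ip(ip) > threshold)
    return 100.0 * flagged / len(ips)
```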


Example Heat Map Interface



FIG. 8 is a two-dimensional example heat map illustrating several IP addresses within a threat reputation score and usage score matrix. The heat map 800 includes two dimensions, the horizontal dimension representing the threat score 870 and the vertical dimension representing the usage score 860. The heat map may display the scores associated with a plurality of IP addresses.


Depending on the embodiment, each IP address may be represented by the reporting module 158 in the heat map 800 using its threat reputation score and usage score. As can be seen from the heat map 800, an IP address 851 with a high threat reputation score appears on the right part of the heat map 800. An IP address 852 with a low threat reputation score appears on the left part of the heat map 800. An IP address 853 associated with a high usage score usually appears on the upper part of the heat map 800, and an IP address 854 associated with a low usage score usually appears on the lower part of the heat map 800. Plotting the scores associated with a plurality of IP addresses can also demonstrate whether the scores of a particular IP address are high or low as compared to other scores and allows a user to identify areas of potential interest. For example, in one embodiment a user may be primarily interested in IP addresses associated with a high risk score and a low usage score. Accordingly, the user may look towards the lower right-hand quadrant of the heat map in order to identify IP addresses that fall within this category. Additionally, the user may easily identify clusters of IP addresses within (or across) a particular quadrant. Such clustering may be indicative of behavior that the user desires to investigate.
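Such a score map can be rendered directly from the score pairs; the following sketch uses matplotlib and the example score values mentioned earlier, and is not part of the disclosed reporting module:

```python
import matplotlib.pyplot as plt

def plot_score_map(ip_scores):
    """ip_scores: dict mapping IP address -> (threat_score, usage_score),
    both in [0, 1]. Plots usage on the vertical axis and threat on the
    horizontal axis, as in the heat map described above."""
    threat = [t for t, _ in ip_scores.values()]
    usage = [u for _, u in ip_scores.values()]
    plt.scatter(threat, usage, alpha=0.6)
    plt.xlabel("Threat reputation score")
    plt.ylabel("Usage score")
    plt.xlim(0, 1)
    plt.ylim(0, 1)
    plt.title("IP reputation: threat vs. usage")
    plt.show()

plot_score_map({"58.58.23.145": (0.78, 0.90), "133.109.7.42": (0.66, 0.03)})
```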


In some embodiments, the heat map may be resolved into specific IP addresses. For example, an IP address and its associated scores may be displayed in pop-up window 850 when a mouse hovers on top of that point in the heat map. The IP address displayed is 127.165.4.2, and it has a threat reputation score of 0.95 and usage score of 0.97. Depending on the specific instance, the scores may mean that this is a false positive because the IP address is very trustworthy and it should not have a high threat reputation score. However, the scores could also mean that a hacker is posing as a trusted user and has been involved in actual threat events. Either way, the heat map 800 may be used for recognizing noteworthy IP addresses for further analysis and also for displaying trends of possible network threats. Although a pop-up window is shown in this example, other types of user interface elements may also be used to demonstrate details regarding the scores associated with an IP address to a user.


Implementation Mechanisms


According to one embodiment, the techniques described herein are implemented by one or more special-purpose computing devices, such as the IP reputation system 150. The special-purpose computing devices may be hard-wired to perform the techniques, or may include digital electronic devices such as one or more application-specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs) that are persistently programmed to perform the techniques, or may include one or more general purpose hardware processors programmed to perform the techniques pursuant to program instructions in firmware, memory, other storage, or a combination. Such special-purpose computing devices may also combine custom hard-wired logic, ASICs, or FPGAs with custom programming to accomplish the techniques. The special-purpose computing devices may be desktop computer systems, server computer systems, portable computer systems, handheld devices, networking devices or any other device or combination of devices that incorporate hard-wired and/or program logic to implement the techniques.


Computing device(s) are generally controlled and coordinated by operating system software, such as iOS, Android, Chrome OS, Windows XP, Windows Vista, Windows 7, Windows 8, Windows Server, Windows CE, Unix, Linux, SunOS, Solaris, Blackberry OS, VxWorks, or other compatible operating systems. In other embodiments, the computing device may be controlled by a proprietary operating system. Conventional operating systems control and schedule computer processes for execution, perform memory management, provide file system, networking, and I/O services, and provide user interface functionality, such as a graphical user interface ("GUI"), among other things.


FIG. 9 is a block diagram that illustrates a computer system (such as the IP reputation system 150) upon which the processes discussed herein may be implemented. For example, the risk assessment 340 and the heat map interface 800 may be generated and displayed to a user by one IP reputation system 150, while a search query may be executed by another IP reputation system 150 (or possibly by the same computer system in some embodiments). Furthermore, the data sources may each include any portion of the components and functionality discussed with reference to the IP reputation system 150.


The IP reputation system 150 includes a bus 802 or other communication mechanism for communicating information, and a hardware processor, or multiple processors, 804 coupled with bus 802 for processing information. Hardware processor(s) 804 may be, for example, one or more general purpose microprocessors.


The IP reputation system 150 also includes a main memory 806, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 802 for storing information and instructions to be executed by processor 804. Main memory 806 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 804. Such instructions, when stored in storage media accessible to processor 804, render IP reputation system 150 into a special-purpose machine that is customized to perform the operations specified in the instructions.


The IP reputation system 150 further includes a read only memory (ROM) 808 or other static storage device coupled to bus 802 for storing static information and instructions for processor 804. A storage device 810, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 802 for storing information and instructions.


The IP reputation system 150 may be coupled via bus 802 to a display 812, such as a cathode ray tube (CRT) or LCD display (or touch screen), for displaying information to a computer user. An input device 814, including alphanumeric and other keys, is coupled to bus 802 for communicating information and command selections to processor 804. Another type of user input device is cursor control 816, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 804 and for controlling cursor movement on display 812. This input device typically has two degrees of freedom in two axes, a first axis (e.g., x) and a second axis (e.g., y), that allows the device to specify positions in a plane. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.


The IP reputation system 150 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.


In general, the word “module,” as used herein, refers to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, Lua, C or C++. A software module may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software modules may be callable from other modules or from themselves, and/or may be invoked in response to detected events or interrupts. Software modules configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware modules may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors. The modules or computing device functionality described herein are preferably implemented as software modules, but may be represented in hardware or firmware. Generally, the modules described herein refer to logical modules that may be combined with other modules or divided into sub-modules despite their physical organization or storage.
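
As a purely illustrative example of the module concept described above, and not the module layout of the IP reputation system 150, a small interpreted-language module might expose entry points that other modules can import and call, or that an event handler can invoke; every name in this sketch is hypothetical.

```python
# ip_reputation_toy.py -- illustrative only; shows a software module with
# entry points callable from other modules or invoked in response to events.
"""Toy reputation helpers packaged as an importable module."""

__all__ = ["record_event", "lookup"]

_events = {}  # hypothetical in-memory store: ip -> number of recorded events

def record_event(ip):
    """Entry point invoked when a new alert event for `ip` is observed."""
    _events[ip] = _events.get(ip, 0) + 1

def lookup(ip):
    """Entry point callable from other modules to read the current count."""
    return _events.get(ip, 0)

if __name__ == "__main__":
    record_event("203.0.113.9")
    print(lookup("203.0.113.9"))  # -> 1
```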


IP reputation system 150 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs the IP reputation system 150 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by the IP reputation system 150 in response to processor(s) 804 executing one or more sequences of one or more instructions contained in main memory 806. Such instructions may be read into main memory 806 from another storage medium, such as storage device 810. Execution of the sequences of instructions contained in main memory 806 causes processor(s) 804 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.


The term “non-transitory media,” and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 810. Volatile media includes dynamic memory, such as main memory 806. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.


Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 802. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.


Various forms of media may be involved in carrying one or more sequences of one or more instructions to processor 804 for execution. For example, the instructions may initially be carried on a magnetic disk or solid state drive of a remote computer. The remote computer can load the instructions into its dynamic memory and send the instructions over a telephone line using a modem. A modem local to IP reputation system 150 can receive the data on the telephone line and use an infra-red transmitter to convert the data to an infra-red signal. An infra-red detector can receive the data carried in the infra-red signal and appropriate circuitry can place the data on bus 802. Bus 802 carries the data to main memory 806, from which processor 804 retrieves and executes the instructions. The instructions received by main memory 806 may optionally be stored on storage device 810 either before or after execution by processor 804.


IP reputation system 150 also includes a communication interface 818 coupled to bus 802. Communication interface 818 provides a two-way data communication coupling to a network link 820 that is connected to a local network 822. For example, communication interface 818 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 818 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 818 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.


Network link 820 typically provides data communication through one or more networks to other data devices. For example, network link 820 may provide a connection through local network 822 to a host computer 824 or to data equipment operated by an Internet Service Provider (ISP) 826. ISP 826 in turn provides data communication services through the world wide packet data communication network now commonly referred to as the “Internet” 828. Local network 822 and Internet 828 both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on network link 820 and through communication interface 818, which carry the digital data to and from the IP reputation system 150, are example forms of transmission media.


The IP reputation system 150 can send messages and receive data, including program code, through the network(s), network link 820 and communication interface 818. In the Internet example, a server 830 might transmit a requested code for an application program through Internet 828, ISP 826, local network 822 and communication interface 818.


The received code may be executed by processor 804 as it is received, and/or stored in storage device 810, or other non-volatile storage for later execution.


Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code modules executed by one or more computer systems or computer processors comprising computer hardware. The processes and algorithms may be implemented partially or wholly in application-specific circuitry.


The various features and processes described above may be used independently of one another, or may be combined in various ways. All possible combinations and subcombinations are intended to fall within the scope of this disclosure. In addition, certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate. For example, described blocks or states may be performed in an order other than that specifically disclosed, or multiple blocks or states may be combined in a single block or state. The example blocks or states may be performed in serial, in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The example systems and components described herein may be configured differently than described. For example, elements may be added to, removed from, or rearranged compared to the disclosed example embodiments.


Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.


Any process descriptions, elements, or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those skilled in the art.


It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. The foregoing description details certain embodiments of the invention. It will be appreciated, however, that no matter how detailed the foregoing appears in text, the invention can be practiced in many ways. As is also stated above, it should be noted that the use of particular terminology when describing certain features or aspects of the invention should not be taken to imply that the terminology is being re-defined herein to be restricted to including any specific characteristics of the features or aspects of the invention with which that terminology is associated. The scope of the invention should therefore be construed in accordance with the appended claims and any equivalents thereof.

Claims
  • 1. A system for detecting computer network threats, the system comprising one or more computer hardware processors that execute specific code instructions to cause the system to at least: receive a network address of a computing system connected to a network attempting or requesting to access a first server connected to the network; determine a threat indicator for the network address, wherein the threat indicator indicates a risk level associated with the network address, and wherein the threat indicator is based at least in part on: a recency of historical activity associated with the network address, wherein the recency is determined by the system based at least in part on: a time associated with an activity of the network address, wherein the time is determined by the system based on at least one of the following: an amount of time between an occurrence of the network address and a current time, or an amount of time between a first occurrence of the network address and a second occurrence of the network address; and a determination regarding reliability of a data source providing some or all of the historical activity data, wherein the reliability of the data source indicates a history of the data source in previously identifying a perceived threat; and in response to determining the threat indicator, initiate an action based at least in part on the threat indicator to perform one or more of: blocking the network address, allowing the network address, or modifying a network address list.
  • 2. The system of claim 1, wherein the threat indicator for the network address is further based on a quantity of occurrences of the network address in the activity of the network address.
  • 3. The system of claim 1, wherein the reliability of the data source indicates a history of the data source in previously identifying a perceived threat that later was confirmed to be an actual threat.
  • 4. The system of claim 1, wherein the threat indicator for the network address is further based on a quantity of occurrences of network addresses associated with the network address in the activity of the network address.
  • 5. The system of claim 1, wherein the threat indicator for the network address comprises a threat score.
  • 6. The system of claim 5, wherein determining the threat indicator for the network address further comprises increasing the threat score in response to increases in a quantity of occurrences of the network address in the activity of the network address and wherein the increased threat score indicates a higher risk level associated with the network address.
  • 7. The system of claim 3, wherein the one or more computer hardware processors is further programmed, via executable code instructions, to receive the network address from a second data source, wherein receiving the network address from the second data source indicates that the network address was likely involved in a network attack; and in response to receiving the network address from the computing system connected to the network and the second data source, increasing the likelihood that the perceived threat of the network address is an actual threat.
  • 8. A computer-implemented method comprising: receiving, at a computing device, a network address of a computing system connected to a network attempting or requesting to access a first server connected to the network; determining a threat indicator for the network address, wherein the threat indicator indicates a risk level associated with the network address, and wherein the threat indicator is based at least in part on: a recency of activity of the network address based on historic activity associated with the network address, wherein the recency is determined by the system based at least in part on: a time associated with an activity of the network address, wherein the time is determined by the system based on at least one of the following: an amount of time between an occurrence of the network address and a current time, or an amount of time between a first occurrence of the network address and a second occurrence of the network address; and a determination regarding reliability of a data source providing some or all of the historical activity data, wherein the reliability of the data source indicates a history of the data source in previously identifying a perceived threat; and in response to determining the threat indicator, initiate an action based at least in part on the threat indicator to perform one or more of: blocking the network address, allowing the network address, or modifying a network address list.
  • 9. The computer-implemented method of claim 8, wherein the threat indicator is further based at least in part on a quantity of occurrences of the network address in the historic activity.
  • 10. The computer-implemented method of claim 8, wherein the threat indicator is further based at least in part on a likelihood that the network address is trustworthy, wherein the likelihood is based at least in part on records of trustworthy activity in the historical activity.
  • 11. The computer-implemented method of claim 8, wherein the historic activity is determined through records associated with at least one of: a Virtual Private Network, a firewall, or a proxy server.
  • 12. The computer-implemented method of claim 8, further comprising: identifying a network address match that at least partially matches the network address in at least one of: a trusted user list, a whitelist, employee data, or a Virtual Private Network list; and in response to identifying the network address match, decreasing the threat indicator.
  • 13. The computer-implemented method of claim 12, wherein the network address match is determined based at least in part on the network address's presence in at least one of the: a trusted user list, a whitelist, employee data, or a Virtual Private Network list.
  • 14. The computer-implemented method of claim 8, further comprising: causing presentation of the threat indicator and the network address in a user interface.
  • 15. The computer-implemented method of claim 8, wherein the threat indicator comprises a threat score.
  • 16. The computer-implemented method of claim 8, wherein determining the threat indicator for the network address further comprises assigning a weight to a threat score based at least in part on the reliability of the data source.
  • 17. A non-transitory computer-readable storage medium storing computer-executable instructions that, when executed by one or more processors, cause the processors to: receive a network address of a computing system connected to a network attempting or requesting to access a first server connected to the network; determine a threat indicator for the network address, wherein the threat indicator indicates a risk level associated with the network address, and wherein the threat indicator is based at least in part on: a recency of activity of the network address based on historic activity associated with the network address, wherein the recency is determined by the system based at least in part on: a time associated with an activity of the network address, wherein the time is determined by the system based on at least one of the following: an amount of time between an occurrence of the network address and a current time, or an amount of time between a first occurrence of the network address and a second occurrence of the network address; and a determination regarding reliability of a data source providing some or all of the historical activity data, wherein the reliability of the data source indicates a history of the data source in previously identifying a perceived threat; and in response to determining the threat indicator, initiate an action based at least in part on the determined threat indicator to perform one or more of: blocking the network address, allowing the network address, or modifying a network address list.
  • 18. The non-transitory computer-readable storage medium of claim 17, wherein the threat indicator is further based at least in part on a quantity of occurrences of the network address in the activity of the network address.
  • 19. The non-transitory computer-readable storage medium of claim 17, wherein the threat indicator is further based at least in part on a likelihood that the network address is trustworthy, wherein the likelihood is based at least in part on historical data of activities associated with the activity of the network address.
INCORPORATION BY REFERENCE TO ANY PRIORITY APPLICATIONS

Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57. This application is a continuation of Ser. No. 14/816,748, filed Aug. 3, 2015, which is a continuation of Ser. No. 14/479,863, filed Sep. 8, 2014, now U.S. Pat. No. 9,100,428, which is a continuation of U.S. patent application Ser. No. 14/147,402, filed Jan. 3, 2014, now U.S. Pat. No. 8,832,832. Each of these applications is hereby incorporated by reference herein in its entirety.

US Referenced Citations (778)
Number Name Date Kind
5109399 Thompson Apr 1992 A
5329108 Lamoure Jul 1994 A
5632009 Rao et al. May 1997 A
5670987 Doi et al. Sep 1997 A
5781704 Rossmo Jul 1998 A
5798769 Chiu et al. Aug 1998 A
5845300 Comer Dec 1998 A
5978475 Schneier et al. Nov 1999 A
6057757 Arrowsmith et al. May 2000 A
6091956 Hollenberg Jul 2000 A
6161098 Wallman Dec 2000 A
6219053 Tachibana et al. Apr 2001 B1
6232971 Haynes May 2001 B1
6247019 Davies Jun 2001 B1
6253203 O'Flaherty et al. Jun 2001 B1
6279018 Kudrolli et al. Aug 2001 B1
6341310 Leshem et al. Jan 2002 B1
6366933 Ball et al. Apr 2002 B1
6369835 Lin Apr 2002 B1
6374251 Fayyad et al. Apr 2002 B1
6456997 Shukla Sep 2002 B1
6549944 Weinberg et al. Apr 2003 B1
6560620 Ching May 2003 B1
6567936 Yang et al. May 2003 B1
6581068 Bensoussan et al. Jun 2003 B1
6594672 Lampson et al. Jul 2003 B1
6631496 Li et al. Oct 2003 B1
6642945 Sharpe Nov 2003 B1
6674434 Chojnacki et al. Jan 2004 B1
6714936 Nevin, III Mar 2004 B1
6725240 Asad et al. Apr 2004 B1
6775675 Nwabueze et al. Aug 2004 B1
6807569 Bhimani et al. Oct 2004 B1
6820135 Dingman Nov 2004 B1
6828920 Owen et al. Dec 2004 B2
6839745 Dingari et al. Jan 2005 B1
6877137 Rivette et al. Apr 2005 B1
6976210 Silva et al. Dec 2005 B1
6978419 Kantrowitz Dec 2005 B1
6980984 Huffman et al. Dec 2005 B1
6985950 Hanson et al. Jan 2006 B1
7017046 Doyle et al. Mar 2006 B2
7036085 Barros Apr 2006 B2
7043702 Chi et al. May 2006 B2
7055110 Kupka et al. May 2006 B2
7069586 Winneg et al. Jun 2006 B1
7139800 Bellotti et al. Nov 2006 B2
7158878 Rasmussen et al. Jan 2007 B2
7162475 Ackerman Jan 2007 B2
7168039 Bertram Jan 2007 B2
7171427 Witowski et al. Jan 2007 B2
7225468 Waisman et al. May 2007 B2
7269786 Malloy et al. Sep 2007 B1
7278105 Kitts Oct 2007 B1
7290698 Poslinski et al. Nov 2007 B2
7333998 Heckerman et al. Feb 2008 B2
7370047 Gorman May 2008 B2
7373669 Eisen May 2008 B2
7379811 Rasmussen et al. May 2008 B2
7379903 Caballero et al. May 2008 B2
7426654 Adams et al. Sep 2008 B2
7451397 Weber et al. Nov 2008 B2
7454466 Bellotti et al. Nov 2008 B2
7467375 Tondreau et al. Dec 2008 B2
7487139 Fraleigh et al. Feb 2009 B2
7502786 Liu et al. Mar 2009 B2
7525422 Bishop et al. Apr 2009 B2
7529727 Arning et al. May 2009 B2
7529734 Dirisala May 2009 B2
7546245 Surpin et al. Jun 2009 B2
7558677 Jones Jul 2009 B2
7574409 Patinkin Aug 2009 B2
7574428 Leiserowitz et al. Aug 2009 B2
7579965 Bucholz Aug 2009 B2
7593995 He et al. Sep 2009 B1
7596285 Brown et al. Sep 2009 B2
7614006 Molander Nov 2009 B2
7617232 Gabbert et al. Nov 2009 B2
7620628 Kapur et al. Nov 2009 B2
7627812 Chamberlain et al. Dec 2009 B2
7634717 Chamberlain et al. Dec 2009 B2
7640173 Surpin et al. Dec 2009 B2
7703021 Flam Apr 2010 B1
7706817 Bamrah et al. Apr 2010 B2
7712049 Williams et al. May 2010 B2
7716067 Surpin et al. May 2010 B2
7716077 Mikurak May 2010 B1
7725530 Sah et al. May 2010 B2
7725547 Albertson et al. May 2010 B2
7730082 Sah et al. Jun 2010 B2
7730109 Rohrs et al. Jun 2010 B2
7752665 Robertson et al. Jul 2010 B1
7770032 Nesta et al. Aug 2010 B2
7770100 Chamberlain et al. Aug 2010 B2
7783658 Bayliss Aug 2010 B1
7801871 Gosnell Sep 2010 B2
7805457 Viola et al. Sep 2010 B1
7809703 Balabhadrapatruni et al. Oct 2010 B2
7814102 Miller et al. Oct 2010 B2
7818658 Chen Oct 2010 B2
7870493 Pall et al. Jan 2011 B2
7894984 Rasmussen et al. Feb 2011 B2
7899611 Downs et al. Mar 2011 B2
7917376 Bellin et al. Mar 2011 B2
7920963 Jouline et al. Apr 2011 B2
7933862 Chamberlain et al. Apr 2011 B2
7941321 Greenstein et al. May 2011 B2
7962281 Rasmussen et al. Jun 2011 B2
7962495 Jain et al. Jun 2011 B2
7962848 Bertram Jun 2011 B2
7970240 Chao et al. Jun 2011 B1
7971150 Raskutti et al. Jun 2011 B2
7984374 Caro et al. Jun 2011 B2
8001465 Kudrolli et al. Aug 2011 B2
8001482 Bhattiprolu et al. Aug 2011 B2
8010545 Stefik et al. Aug 2011 B2
8010886 Gusmorino et al. Aug 2011 B2
8015487 Roy et al. Sep 2011 B2
8019709 Norton et al. Sep 2011 B2
8024778 Cash et al. Sep 2011 B2
8036632 Cona et al. Oct 2011 B1
8036971 Aymeloglu et al. Oct 2011 B2
8046283 Burns Oct 2011 B2
8046362 Bayliss Oct 2011 B2
8054756 Chand et al. Nov 2011 B2
8082172 Chao et al. Dec 2011 B2
8103543 Zwicky Jan 2012 B1
8134457 Velipasalar et al. Mar 2012 B2
8135679 Bayliss Mar 2012 B2
8135719 Bayliss Mar 2012 B2
8145703 Frishert et al. Mar 2012 B2
8181253 Zaitsev et al. May 2012 B1
8185819 Sah et al. May 2012 B2
8190893 Benson et al. May 2012 B2
8196184 Amirov et al. Jun 2012 B2
8214361 Sandler et al. Jul 2012 B1
8214490 Vos et al. Jul 2012 B1
8214764 Gemmell et al. Jul 2012 B2
8225201 Michael Jul 2012 B2
8229902 Vishniac et al. Jul 2012 B2
8229947 Fujinaga Jul 2012 B2
8230333 Decherd et al. Jul 2012 B2
8239668 Chen et al. Aug 2012 B1
8266168 Bayliss Sep 2012 B2
8271461 Pike et al. Sep 2012 B2
8271598 Guy et al. Sep 2012 B2
8280880 Aymeloglu et al. Oct 2012 B1
8290926 Ozzie et al. Oct 2012 B2
8290942 Jones et al. Oct 2012 B2
8301464 Cave et al. Oct 2012 B1
8301904 Gryaznov Oct 2012 B1
8302855 Ma et al. Nov 2012 B2
8312367 Foster Nov 2012 B2
8312546 Alme Nov 2012 B2
8321943 Walters et al. Nov 2012 B1
8347398 Weber Jan 2013 B1
8352881 Champion et al. Jan 2013 B2
8368695 Howell et al. Feb 2013 B2
8397171 Klassen et al. Mar 2013 B2
8411046 Kruzeniski et al. Apr 2013 B2
8412707 Mianji Apr 2013 B1
8447674 Choudhuri et al. May 2013 B2
8447722 Ahuja et al. May 2013 B1
8452790 Mianji May 2013 B1
8463036 Ramesh et al. Jun 2013 B1
8473454 Evanitsky et al. Jun 2013 B2
8484115 Aymeloglu et al. Jul 2013 B2
8484168 Bayliss Jul 2013 B2
8489331 Kopf et al. Jul 2013 B2
8489641 Seefeld et al. Jul 2013 B1
8495077 Bayliss Jul 2013 B2
8498969 Bayliss Jul 2013 B2
8498984 Hwang et al. Jul 2013 B1
8510743 Hackborn et al. Aug 2013 B2
8514082 Cova et al. Aug 2013 B2
8515207 Chau Aug 2013 B2
8554579 Tribble et al. Oct 2013 B2
8554653 Falkenborg et al. Oct 2013 B2
8554709 Goodson et al. Oct 2013 B2
8560413 Quarterman Oct 2013 B1
8577911 Stepinski et al. Nov 2013 B1
8589273 Creeden et al. Nov 2013 B2
8595234 Siripuapu et al. Nov 2013 B2
8600872 Yan Dec 2013 B1
8620641 Farnsworth et al. Dec 2013 B2
8639757 Zang et al. Jan 2014 B1
8646080 Williamson et al. Feb 2014 B2
8676597 Buehler et al. Mar 2014 B2
8676857 Adams et al. Mar 2014 B1
8682812 Ranjan Mar 2014 B1
8683322 Cooper Mar 2014 B1
8689108 Duffield et al. Apr 2014 B1
8700547 Long et al. Apr 2014 B2
8707185 Robinson et al. Apr 2014 B2
8713018 Knight et al. Apr 2014 B2
8713467 Goldenberg et al. Apr 2014 B1
8726379 Stiansen et al. May 2014 B1
8739278 Varghese May 2014 B2
8742934 Sarpy et al. Jun 2014 B1
8744890 Bernier Jun 2014 B1
8745516 Mason et al. Jun 2014 B2
8756244 Dassa et al. Jun 2014 B2
8769412 Gill et al. Jul 2014 B2
8781169 Jackson et al. Jul 2014 B2
8782794 Ramcharran Jul 2014 B2
8787939 Papakipos et al. Jul 2014 B2
8788405 Sprague et al. Jul 2014 B1
8788407 Singh et al. Jul 2014 B1
8799190 Stokes et al. Aug 2014 B2
8799799 Cervelli et al. Aug 2014 B1
8799812 Parker Aug 2014 B2
8812960 Sun et al. Aug 2014 B1
8813050 Watters et al. Aug 2014 B2
8818892 Sprague et al. Aug 2014 B1
8830322 Nerayoff et al. Sep 2014 B2
8832594 Thompson et al. Sep 2014 B1
8832832 Visbal Sep 2014 B1
8839434 McDougal et al. Sep 2014 B2
8868486 Tamayo Oct 2014 B2
8868537 Colgrove et al. Oct 2014 B1
8917274 Ma et al. Dec 2014 B2
8924388 Elliot et al. Dec 2014 B2
8924389 Elliot et al. Dec 2014 B2
8924872 Bogomolov et al. Dec 2014 B1
8937619 Sharma et al. Jan 2015 B2
8938686 Erenrich et al. Jan 2015 B1
9009171 Grossman et al. Apr 2015 B1
9009827 Albertson et al. Apr 2015 B1
9021260 Falk et al. Apr 2015 B1
9021384 Beard et al. Apr 2015 B1
9043696 Meiklejohn et al. May 2015 B1
9043894 Dennison et al. May 2015 B1
9047441 Xie et al. Jun 2015 B2
9049117 Nucci et al. Jun 2015 B1
9100428 Visbal Aug 2015 B1
9116975 Shankar et al. Aug 2015 B2
9135658 Sprague et al. Sep 2015 B2
9165299 Stowe et al. Oct 2015 B1
9171334 Visbal et al. Oct 2015 B1
9177014 Gross Nov 2015 B2
9177344 Singh et al. Nov 2015 B1
9202249 Cohen et al. Dec 2015 B1
9215240 Merza Dec 2015 B2
9230280 Maag et al. Jan 2016 B1
9235638 Gattiker et al. Jan 2016 B2
9256664 Chakerian et al. Feb 2016 B2
9335897 Goldenberg May 2016 B2
9338013 Castellucci et al. May 2016 B2
9344447 Cohen et al. May 2016 B2
9367872 Visbal et al. Jun 2016 B1
9558352 Dennison et al. Jan 2017 B1
9560066 Visbal Jan 2017 B2
9635046 Spiro et al. Apr 2017 B2
20010021936 Bertram Sep 2001 A1
20020033848 Sciammarella et al. Mar 2002 A1
20020065708 Senay et al. May 2002 A1
20020091707 Keller Jul 2002 A1
20020095360 Joao Jul 2002 A1
20020095658 Shulman Jul 2002 A1
20020103705 Brady Aug 2002 A1
20020112157 Doyle et al. Aug 2002 A1
20020116120 Ruiz et al. Aug 2002 A1
20020130907 Chi et al. Sep 2002 A1
20020147805 Leshem et al. Oct 2002 A1
20020174201 Ramer et al. Nov 2002 A1
20020194119 Wright et al. Dec 2002 A1
20030028560 Kudrolli et al. Feb 2003 A1
20030033228 Bosworth-Davies et al. Feb 2003 A1
20030039948 Donahue Feb 2003 A1
20030074368 Schuetze et al. Apr 2003 A1
20030097330 Hillmer et al. May 2003 A1
20030140106 Raguseo Jul 2003 A1
20030144868 MacIntyre et al. Jul 2003 A1
20030154044 Lundstedt et al. Aug 2003 A1
20030163352 Surpin et al. Aug 2003 A1
20030200217 Ackerman Oct 2003 A1
20030225755 Iwayama et al. Dec 2003 A1
20030229848 Arend et al. Dec 2003 A1
20040032432 Baynger Feb 2004 A1
20040034570 Davis Feb 2004 A1
20040044912 Connary Mar 2004 A1
20040064256 Barinek et al. Apr 2004 A1
20040085318 Hassler et al. May 2004 A1
20040095349 Bito et al. May 2004 A1
20040111410 Burgoon et al. Jun 2004 A1
20040123139 Aiello et al. Jun 2004 A1
20040126840 Cheng et al. Jul 2004 A1
20040143602 Ruiz et al. Jul 2004 A1
20040143796 Lerner et al. Jul 2004 A1
20040153418 Hanweck Aug 2004 A1
20040163039 McPherson et al. Aug 2004 A1
20040181554 Heckerman et al. Sep 2004 A1
20040193600 Kaasten et al. Sep 2004 A1
20040205524 Richter et al. Oct 2004 A1
20040221223 Yu et al. Nov 2004 A1
20040250124 Chesla et al. Dec 2004 A1
20040260702 Cragun et al. Dec 2004 A1
20040267746 Marcjan et al. Dec 2004 A1
20050010472 Ouatse et al. Jan 2005 A1
20050027705 Sadri et al. Feb 2005 A1
20050028094 Allyn Feb 2005 A1
20050039119 Parks et al. Feb 2005 A1
20050065811 Chu et al. Mar 2005 A1
20050080769 Gemmell Apr 2005 A1
20050086207 Heuer et al. Apr 2005 A1
20050108063 Madill et al. May 2005 A1
20050125715 Franco et al. Jun 2005 A1
20050157662 Bingham et al. Jul 2005 A1
20050162523 Darrell et al. Jul 2005 A1
20050166144 Gross Jul 2005 A1
20050180330 Shapiro Aug 2005 A1
20050182793 Keenan et al. Aug 2005 A1
20050183005 Denoue et al. Aug 2005 A1
20050204001 Stein et al. Sep 2005 A1
20050210409 Jou Sep 2005 A1
20050222928 Steier et al. Oct 2005 A1
20050229256 Banzhof Oct 2005 A2
20050246327 Yeung et al. Nov 2005 A1
20050251786 Citron et al. Nov 2005 A1
20050262556 Waisman et al. Nov 2005 A1
20050275638 Kolmykov-Zotov et al. Dec 2005 A1
20060026120 Carolan et al. Feb 2006 A1
20060026170 Kreitler et al. Feb 2006 A1
20060026688 Shah Feb 2006 A1
20060031928 Conley et al. Feb 2006 A1
20060045470 Poslinski et al. Mar 2006 A1
20060059139 Robinson Mar 2006 A1
20060059238 Slater Mar 2006 A1
20060069912 Zheng et al. Mar 2006 A1
20060074866 Chamberlain et al. Apr 2006 A1
20060074881 Vembu et al. Apr 2006 A1
20060080619 Carlson et al. Apr 2006 A1
20060093222 Saffer et al. May 2006 A1
20060095521 Patinkin May 2006 A1
20060129746 Porter Jun 2006 A1
20060139375 Rasmussen et al. Jun 2006 A1
20060142949 Helt Jun 2006 A1
20060143034 Rothermel Jun 2006 A1
20060143075 Carr et al. Jun 2006 A1
20060143079 Basak et al. Jun 2006 A1
20060149596 Surpin et al. Jul 2006 A1
20060179003 Steele et al. Aug 2006 A1
20060203337 White Sep 2006 A1
20060212931 Shull Sep 2006 A1
20060218637 Thomas et al. Sep 2006 A1
20060241974 Chao et al. Oct 2006 A1
20060242040 Rader Oct 2006 A1
20060242630 Koike et al. Oct 2006 A1
20060265747 Judge Nov 2006 A1
20060271277 Hu et al. Nov 2006 A1
20060279630 Aggarwal et al. Dec 2006 A1
20070000999 Kubo et al. Jan 2007 A1
20070011150 Frank Jan 2007 A1
20070011304 Error Jan 2007 A1
20070016363 Huang et al. Jan 2007 A1
20070038646 Thota Feb 2007 A1
20070038962 Fuchs et al. Feb 2007 A1
20070057966 Ohno et al. Mar 2007 A1
20070078832 Ott et al. Apr 2007 A1
20070083541 Fraleigh et al. Apr 2007 A1
20070094389 Nussey et al. Apr 2007 A1
20070094500 Shannon et al. Apr 2007 A1
20070106582 Baker et al. May 2007 A1
20070143851 Nicodemus Jun 2007 A1
20070150369 Zivin Jun 2007 A1
20070174760 Chamberlain et al. Jul 2007 A1
20070192265 Chopin et al. Aug 2007 A1
20070198571 Ferguson et al. Aug 2007 A1
20070208497 Downs et al. Sep 2007 A1
20070208498 Barker et al. Sep 2007 A1
20070208736 Tanigawa et al. Sep 2007 A1
20070233709 Abnous Oct 2007 A1
20070240062 Christena et al. Oct 2007 A1
20070266336 Nojima et al. Nov 2007 A1
20070284433 Domenica et al. Dec 2007 A1
20070294200 Au Dec 2007 A1
20070294643 Kyle Dec 2007 A1
20070294766 Mir et al. Dec 2007 A1
20080016216 Worley et al. Jan 2008 A1
20080040684 Crump Feb 2008 A1
20080051989 Welsh Feb 2008 A1
20080052142 Bailey et al. Feb 2008 A1
20080069081 Chand et al. Mar 2008 A1
20080077597 Butler Mar 2008 A1
20080077642 Carbone et al. Mar 2008 A1
20080082486 Lermant et al. Apr 2008 A1
20080104019 Nath May 2008 A1
20080104407 Horne et al. May 2008 A1
20080126951 Sood et al. May 2008 A1
20080133567 Ames et al. Jun 2008 A1
20080148398 Mezack et al. Jun 2008 A1
20080155440 Trevor et al. Jun 2008 A1
20080162616 Gross et al. Jul 2008 A1
20080175266 Alperovitch et al. Jul 2008 A1
20080195417 Surpin et al. Aug 2008 A1
20080195608 Clover Aug 2008 A1
20080201580 Savitzky et al. Aug 2008 A1
20080222295 Robinson et al. Sep 2008 A1
20080222706 Renaud et al. Sep 2008 A1
20080229422 Hudis et al. Sep 2008 A1
20080255973 El Wade et al. Oct 2008 A1
20080263468 Cappione et al. Oct 2008 A1
20080267107 Rosenberg Oct 2008 A1
20080276167 Michael Nov 2008 A1
20080278311 Grange et al. Nov 2008 A1
20080288306 MacIntyre et al. Nov 2008 A1
20080288425 Posse et al. Nov 2008 A1
20080301643 Appleton et al. Dec 2008 A1
20080313132 Hao et al. Dec 2008 A1
20090002492 Velipasalar et al. Jan 2009 A1
20090018940 Wang et al. Jan 2009 A1
20090024505 Patel et al. Jan 2009 A1
20090027418 Maru et al. Jan 2009 A1
20090030915 Winter et al. Jan 2009 A1
20090044279 Crawford et al. Feb 2009 A1
20090055251 Shah et al. Feb 2009 A1
20090076845 Bellin et al. Mar 2009 A1
20090082997 Tokman et al. Mar 2009 A1
20090083184 Eisen Mar 2009 A1
20090088964 Schaaf et al. Apr 2009 A1
20090094166 Aymeloglu et al. Apr 2009 A1
20090103442 Douville Apr 2009 A1
20090106178 Chu Apr 2009 A1
20090112745 Stefanescu Apr 2009 A1
20090119309 Gibson et al. May 2009 A1
20090125359 Knapic May 2009 A1
20090125369 Kloosstra et al. May 2009 A1
20090125459 Norton et al. May 2009 A1
20090132921 Hwangbo et al. May 2009 A1
20090132953 Reed et al. May 2009 A1
20090143052 Bates et al. Jun 2009 A1
20090144262 White et al. Jun 2009 A1
20090144274 Fraleigh et al. Jun 2009 A1
20090164934 Bhattiprolu et al. Jun 2009 A1
20090171939 Athsani et al. Jul 2009 A1
20090172511 Decherd et al. Jul 2009 A1
20090172821 Daira et al. Jul 2009 A1
20090177962 Gusmorino et al. Jul 2009 A1
20090179892 Tsuda et al. Jul 2009 A1
20090187464 Bai et al. Jul 2009 A1
20090187546 Whyte et al. Jul 2009 A1
20090187548 Ji et al. Jul 2009 A1
20090192957 Subramanian et al. Jul 2009 A1
20090222400 Kupershmidt et al. Sep 2009 A1
20090222759 Drieschner Sep 2009 A1
20090222760 Halverson et al. Sep 2009 A1
20090228701 Lin Sep 2009 A1
20090234720 George et al. Sep 2009 A1
20090249244 Robinson et al. Oct 2009 A1
20090254970 Agarwal et al. Oct 2009 A1
20090254971 Herz Oct 2009 A1
20090271343 Vaiciulis et al. Oct 2009 A1
20090271359 Bayliss Oct 2009 A1
20090281839 Lynn et al. Nov 2009 A1
20090287470 Farnsworth et al. Nov 2009 A1
20090292626 Oxford Nov 2009 A1
20090300589 Watters et al. Dec 2009 A1
20090313463 Pang et al. Dec 2009 A1
20090318775 Michelson et al. Dec 2009 A1
20090319418 Herz Dec 2009 A1
20090328222 Heiman et al. Dec 2009 A1
20100011282 Dollard et al. Jan 2010 A1
20100042922 Bradateanu et al. Feb 2010 A1
20100057622 Faith et al. Mar 2010 A1
20100057716 Stefik et al. Mar 2010 A1
20100070523 Delgo et al. Mar 2010 A1
20100070842 Aymeloglu et al. Mar 2010 A1
20100070845 Facemire et al. Mar 2010 A1
20100070897 Aymeloglu et al. Mar 2010 A1
20100077481 Polyakov et al. Mar 2010 A1
20100077483 Stolfo et al. Mar 2010 A1
20100098318 Anderson Apr 2010 A1
20100100963 Mahaffey Apr 2010 A1
20100103124 Kruzeniski et al. Apr 2010 A1
20100106611 Paulsen et al. Apr 2010 A1
20100114887 Conway et al. May 2010 A1
20100122152 Chamberlain et al. May 2010 A1
20100125546 Barrett et al. May 2010 A1
20100131457 Heimendinger May 2010 A1
20100162176 Dunton Jun 2010 A1
20100169237 Howard et al. Jul 2010 A1
20100185691 Irmak et al. Jul 2010 A1
20100191563 Schlaifer et al. Jul 2010 A1
20100198684 Eraker et al. Aug 2010 A1
20100199225 Coleman et al. Aug 2010 A1
20100211578 Lundberg et al. Aug 2010 A1
20100228812 Uomini Sep 2010 A1
20100235915 Memon et al. Sep 2010 A1
20100250412 Wagner Sep 2010 A1
20100262688 Hussain et al. Oct 2010 A1
20100280857 Liu et al. Nov 2010 A1
20100293174 Bennett et al. Nov 2010 A1
20100306029 Jolley Dec 2010 A1
20100306713 Geisner et al. Dec 2010 A1
20100313119 Baldwin et al. Dec 2010 A1
20100318924 Frankel et al. Dec 2010 A1
20100321399 Ellren et al. Dec 2010 A1
20100325526 Ellis et al. Dec 2010 A1
20100325581 Finkelstein et al. Dec 2010 A1
20100330801 Rouh Dec 2010 A1
20110029526 Knight et al. Feb 2011 A1
20110047159 Baid et al. Feb 2011 A1
20110055140 Roychowdhury Mar 2011 A1
20110060753 Shaked et al. Mar 2011 A1
20110060910 Gormish et al. Mar 2011 A1
20110061013 Bilicki et al. Mar 2011 A1
20110066933 Ludwig Mar 2011 A1
20110074811 Hanson et al. Mar 2011 A1
20110078055 Faribault et al. Mar 2011 A1
20110078173 Seligmann et al. Mar 2011 A1
20110087519 Fordyce, III et al. Apr 2011 A1
20110093327 Fordyce, III et al. Apr 2011 A1
20110099133 Chang et al. Apr 2011 A1
20110117878 Barash et al. May 2011 A1
20110119100 Ruhl et al. May 2011 A1
20110131122 Griffin et al. Jun 2011 A1
20110137766 Rasmussen et al. Jun 2011 A1
20110153384 Horne et al. Jun 2011 A1
20110161096 Buehler et al. Jun 2011 A1
20110167054 Bailey et al. Jul 2011 A1
20110167105 Ramakrishnan et al. Jul 2011 A1
20110167493 Song et al. Jul 2011 A1
20110170799 Carrino et al. Jul 2011 A1
20110173032 Payne et al. Jul 2011 A1
20110173093 Psota et al. Jul 2011 A1
20110178842 Rane et al. Jul 2011 A1
20110185316 Reid et al. Jul 2011 A1
20110202555 Cordover et al. Aug 2011 A1
20110208724 Jones et al. Aug 2011 A1
20110213655 Henkin Sep 2011 A1
20110218934 Elser Sep 2011 A1
20110219450 McDougal et al. Sep 2011 A1
20110225198 Edwards et al. Sep 2011 A1
20110225650 Margolies et al. Sep 2011 A1
20110231223 Winters Sep 2011 A1
20110238495 Kang Sep 2011 A1
20110238510 Rowen et al. Sep 2011 A1
20110238553 Raj et al. Sep 2011 A1
20110238570 Li et al. Sep 2011 A1
20110246229 Pacha Oct 2011 A1
20110251951 Kolkowtiz Oct 2011 A1
20110258158 Resende et al. Oct 2011 A1
20110270604 Qi et al. Nov 2011 A1
20110270705 Parker Nov 2011 A1
20110289397 Eastmond et al. Nov 2011 A1
20110289407 Naik et al. Nov 2011 A1
20110289420 Morioka et al. Nov 2011 A1
20110291851 Whisenant Dec 2011 A1
20110307382 Siegel et al. Dec 2011 A1
20110310005 Chen et al. Dec 2011 A1
20110314007 Dassa et al. Dec 2011 A1
20110314546 Aziz et al. Dec 2011 A1
20120004904 Shin et al. Jan 2012 A1
20120011238 Rathod Jan 2012 A1
20120019559 Siler et al. Jan 2012 A1
20120036013 Neuhaus et al. Feb 2012 A1
20120036434 Oberstein Feb 2012 A1
20120050293 Carlhian et al. Mar 2012 A1
20120066166 Curbera et al. Mar 2012 A1
20120066296 Appleton et al. Mar 2012 A1
20120072825 Sherkin et al. Mar 2012 A1
20120079363 Folting et al. Mar 2012 A1
20120079592 Pandrangi Mar 2012 A1
20120084118 Bai et al. Apr 2012 A1
20120084135 Nissan et al. Apr 2012 A1
20120084866 Stolfo Apr 2012 A1
20120106801 Jackson May 2012 A1
20120110633 An et al. May 2012 A1
20120110674 Belani et al. May 2012 A1
20120117082 Koperda et al. May 2012 A1
20120131107 Yost May 2012 A1
20120131512 Takeuchi et al. May 2012 A1
20120137235 Ts et al. May 2012 A1
20120144335 Abeln et al. Jun 2012 A1
20120158626 Zhu et al. Jun 2012 A1
20120159307 Chung et al. Jun 2012 A1
20120159362 Brown et al. Jun 2012 A1
20120159399 Bastide et al. Jun 2012 A1
20120169593 Mak et al. Jul 2012 A1
20120170847 Tsukidate Jul 2012 A1
20120173381 Smith Jul 2012 A1
20120173985 Peppel Jul 2012 A1
20120180002 Campbell et al. Jul 2012 A1
20120196557 Reich et al. Aug 2012 A1
20120196558 Reich et al. Aug 2012 A1
20120197651 Robinson et al. Aug 2012 A1
20120203708 Psota et al. Aug 2012 A1
20120208636 Feige Aug 2012 A1
20120215898 Shah et al. Aug 2012 A1
20120218305 Patterson et al. Aug 2012 A1
20120221511 Gibson et al. Aug 2012 A1
20120221553 Wittmer et al. Aug 2012 A1
20120221580 Barney Aug 2012 A1
20120245976 Kumar et al. Sep 2012 A1
20120246148 Dror Sep 2012 A1
20120254129 Wheeler et al. Oct 2012 A1
20120254947 Dheap Oct 2012 A1
20120266245 McDougal et al. Oct 2012 A1
20120284345 Costenaro et al. Nov 2012 A1
20120284791 Miller et al. Nov 2012 A1
20120290879 Shibuya et al. Nov 2012 A1
20120296907 Long et al. Nov 2012 A1
20120304244 Xie et al. Nov 2012 A1
20120310831 Harris et al. Dec 2012 A1
20120310838 Harris et al. Dec 2012 A1
20120311684 Paulsen et al. Dec 2012 A1
20120323829 Stokes et al. Dec 2012 A1
20120323888 Osann, Jr. Dec 2012 A1
20120330801 McDougal et al. Dec 2012 A1
20120330973 Ghuneim et al. Dec 2012 A1
20130006426 Healey et al. Jan 2013 A1
20130006655 Van Arkel et al. Jan 2013 A1
20130006668 Van Arkel et al. Jan 2013 A1
20130006725 Simanek et al. Jan 2013 A1
20130006916 McBride et al. Jan 2013 A1
20130018796 Kolhatkar et al. Jan 2013 A1
20130019306 Lagar-Cavilla et al. Jan 2013 A1
20130024268 Manickavelu Jan 2013 A1
20130024307 Fuerstenberg et al. Jan 2013 A1
20130024339 Choudhuri et al. Jan 2013 A1
20130046635 Grigg et al. Feb 2013 A1
20130046842 Muntz et al. Feb 2013 A1
20130060786 Serrano et al. Mar 2013 A1
20130061169 Pearcy et al. Mar 2013 A1
20130073377 Heath Mar 2013 A1
20130073454 Busch Mar 2013 A1
20130078943 Biage et al. Mar 2013 A1
20130086482 Parsons Apr 2013 A1
20130096988 Grossman et al. Apr 2013 A1
20130097482 Marantz et al. Apr 2013 A1
20130097709 Basavapatna et al. Apr 2013 A1
20130101159 Chao et al. Apr 2013 A1
20130110822 Ikeda et al. May 2013 A1
20130110876 Meijer et al. May 2013 A1
20130110877 Bonham et al. May 2013 A1
20130111320 Campbell et al. May 2013 A1
20130117651 Waldman et al. May 2013 A1
20130139261 Friedrichs et al. May 2013 A1
20130139268 An et al. May 2013 A1
20130150004 Rosen Jun 2013 A1
20130151148 Parundekar et al. Jun 2013 A1
20130151388 Falkenborg et al. Jun 2013 A1
20130151453 Bhanot et al. Jun 2013 A1
20130157234 Gulli et al. Jun 2013 A1
20130160120 Malaviya et al. Jun 2013 A1
20130166480 Popescu et al. Jun 2013 A1
20130166550 Buchmann et al. Jun 2013 A1
20130176321 Mitchell et al. Jul 2013 A1
20130179420 Park et al. Jul 2013 A1
20130185307 El-Yaniv et al. Jul 2013 A1
20130185320 Iwasaki et al. Jul 2013 A1
20130197925 Blue Aug 2013 A1
20130211985 Clark et al. Aug 2013 A1
20130224696 Wolfe et al. Aug 2013 A1
20130225212 Khan Aug 2013 A1
20130226318 Procyk Aug 2013 A1
20130226953 Markovich et al. Aug 2013 A1
20130232045 Tai et al. Sep 2013 A1
20130238616 Rose et al. Sep 2013 A1
20130239217 Kindler et al. Sep 2013 A1
20130246170 Gross et al. Sep 2013 A1
20130251233 Yang et al. Sep 2013 A1
20130262527 Hunter et al. Oct 2013 A1
20130263019 Castellanos et al. Oct 2013 A1
20130267207 Hao et al. Oct 2013 A1
20130268520 Fisher et al. Oct 2013 A1
20130268994 Cooper Oct 2013 A1
20130276799 Davidson Oct 2013 A1
20130279757 Kephart Oct 2013 A1
20130282696 John et al. Oct 2013 A1
20130290011 Lynn et al. Oct 2013 A1
20130290825 Arndt et al. Oct 2013 A1
20130297619 Chandrasekaran et al. Nov 2013 A1
20130311375 Priebatsch Nov 2013 A1
20130318594 Hoy et al. Nov 2013 A1
20130339218 Subramanian et al. Dec 2013 A1
20130339514 Crank et al. Dec 2013 A1
20140006109 Callioni et al. Jan 2014 A1
20140012796 Petersen et al. Jan 2014 A1
20140013451 Kulka et al. Jan 2014 A1
20140019936 Cohanoff Jan 2014 A1
20140032506 Hoey et al. Jan 2014 A1
20140033010 Richardt et al. Jan 2014 A1
20140040371 Gurevich et al. Feb 2014 A1
20140047319 Eberlein Feb 2014 A1
20140047357 Alfaro et al. Feb 2014 A1
20140058763 Zizzamia et al. Feb 2014 A1
20140059038 McPherson et al. Feb 2014 A1
20140059683 Ashley Feb 2014 A1
20140067611 Adachi et al. Mar 2014 A1
20140068487 Steiger et al. Mar 2014 A1
20140074855 Zhao et al. Mar 2014 A1
20140081652 Klindworth Mar 2014 A1
20140095273 Tang et al. Apr 2014 A1
20140095509 Patton Apr 2014 A1
20140108068 Williams Apr 2014 A1
20140108380 Gotz et al. Apr 2014 A1
20140108985 Scott et al. Apr 2014 A1
20140123279 Bishop et al. May 2014 A1
20140129261 Bothwell et al. May 2014 A1
20140136285 Carvalho May 2014 A1
20140143009 Brice et al. May 2014 A1
20140149130 Getchius May 2014 A1
20140149272 Hirani et al. May 2014 A1
20140149436 Bahrami et al. May 2014 A1
20140156484 Chan et al. Jun 2014 A1
20140156527 Grigg et al. Jun 2014 A1
20140157172 Peery et al. Jun 2014 A1
20140164502 Khodorenko et al. Jun 2014 A1
20140173712 Ferdinand Jun 2014 A1
20140173738 Condry et al. Jun 2014 A1
20140188895 Wang et al. Jul 2014 A1
20140189536 Lange et al. Jul 2014 A1
20140195515 Baker et al. Jul 2014 A1
20140195887 Ellis et al. Jul 2014 A1
20140214579 Shen et al. Jul 2014 A1
20140222521 Chait Aug 2014 A1
20140222793 Sadkin et al. Aug 2014 A1
20140229422 Jain et al. Aug 2014 A1
20140244388 Manouchehri et al. Aug 2014 A1
20140267294 Ma Sep 2014 A1
20140267295 Sharma Sep 2014 A1
20140279824 Tamayo Sep 2014 A1
20140283067 Call et al. Sep 2014 A1
20140283107 Walton et al. Sep 2014 A1
20140310266 Greenfield Oct 2014 A1
20140310282 Sprague et al. Oct 2014 A1
20140316911 Gross Oct 2014 A1
20140325643 Bart et al. Oct 2014 A1
20140331119 Dixon et al. Nov 2014 A1
20140333651 Cervelli et al. Nov 2014 A1
20140337772 Cervelli et al. Nov 2014 A1
20140344230 Krause et al. Nov 2014 A1
20140366132 Stiansen et al. Dec 2014 A1
20140379812 Bastide et al. Dec 2014 A1
20150019394 Unser et al. Jan 2015 A1
20150039565 Lucas Feb 2015 A1
20150046791 Isaacson Feb 2015 A1
20150046844 Lee et al. Feb 2015 A1
20150046845 Lee et al. Feb 2015 A1
20150046870 Goldenberg et al. Feb 2015 A1
20150046876 Goldenberg Feb 2015 A1
20150067533 Volach Mar 2015 A1
20150089424 Duffield et al. Mar 2015 A1
20150100897 Sun et al. Apr 2015 A1
20150100907 Erenrich et al. Apr 2015 A1
20150106379 Elliot et al. Apr 2015 A1
20150128274 Giokas May 2015 A1
20150134666 Gattiker et al. May 2015 A1
20150169709 Kara et al. Jun 2015 A1
20150169726 Kara et al. Jun 2015 A1
20150170077 Kara et al. Jun 2015 A1
20150178825 Huerta Jun 2015 A1
20150178877 Bogomolov et al. Jun 2015 A1
20150186821 Wang et al. Jul 2015 A1
20150187036 Wang et al. Jul 2015 A1
20150188715 Castelluci et al. Jul 2015 A1
20150207809 Macaulay Jul 2015 A1
20150223158 McCann et al. Aug 2015 A1
20150227295 Meiklejohn et al. Aug 2015 A1
20150229664 Hawthorn et al. Aug 2015 A1
20150235334 Wang et al. Aug 2015 A1
20150248563 Alfarano et al. Sep 2015 A1
20150256498 Snider et al. Sep 2015 A1
20150261847 Ducott et al. Sep 2015 A1
20150309719 Ma et al. Oct 2015 A1
20150317342 Grossman et al. Nov 2015 A1
20150324868 Kaftan et al. Nov 2015 A1
20150326601 Grondin et al. Nov 2015 A1
20150347558 Blaas et al. Dec 2015 A1
20160004764 Chakerian et al. Jan 2016 A1
20160004864 Falk et al. Jan 2016 A1
20160028759 Visbal Jan 2016 A1
20160034470 Sprague et al. Feb 2016 A1
20160048937 Mathura et al. Feb 2016 A1
20170041335 Spiro et al. Feb 2017 A1
20170134397 Dennison et al. May 2017 A1
20170187739 Spiro et al. Jun 2017 A1
Foreign Referenced Citations (64)
Number Date Country
101729531 Jun 2010 CN
103281301 Sep 2013 CN
102054015 May 2014 CN
102014103482 Sep 2014 DE
102014204827 Sep 2014 DE
102014204830 Sep 2014 DE
102014204834 Sep 2014 DE
102014215621 Feb 2015 DE
1191463 Mar 2002 EP
1672527 Jun 2006 EP
1962222 Aug 2008 EP
2551799 Jan 2013 EP
2555153 Feb 2013 EP
2560134 Feb 2013 EP
2778977 Sep 2014 EP
2778983 Sep 2014 EP
2779082 Sep 2014 EP
2835745 Feb 2015 EP
2835770 Feb 2015 EP
2838039 Feb 2015 EP
2846241 Mar 2015 EP
2851852 Mar 2015 EP
2858014 Apr 2015 EP
2858018 Apr 2015 EP
2863326 Apr 2015 EP
2863346 Apr 2015 EP
2869211 May 2015 EP
2881868 Jun 2015 EP
2884439 Jun 2015 EP
2884440 Jun 2015 EP
2891992 Jul 2015 EP
2892197 Jul 2015 EP
2897051 Jul 2015 EP
2911078 Aug 2015 EP
2911100 Aug 2015 EP
2940603 Nov 2015 EP
2940609 Nov 2015 EP
2963577 Jan 2016 EP
2963578 Jan 2016 EP
2985729 Feb 2016 EP
2985974 Feb 2016 EP
3018879 May 2016 EP
2513247 Oct 2014 GB
2516155 Jan 2015 GB
2518745 Apr 2015 GB
2012778 Nov 2014 NL
2013306 Feb 2015 NL
2011642 Aug 2015 NL
624557 Dec 2014 NZ
WO 2000009529 Feb 2000 WO
WO 2002065353 Aug 2002 WO
WO 2005010685 Feb 2005 WO
WO 2005104736 Nov 2005 WO
WO 2005116851 Dec 2005 WO
WO-2005116851 Dec 2005 WO
WO 2008011728 Jan 2008 WO
WO 2008064207 May 2008 WO
WO 2008113059 Sep 2008 WO
WO 2009061501 May 2009 WO
WO 2010000014 Jan 2010 WO
WO 2010030913 Mar 2010 WO
WO 2013010157 Jan 2013 WO
WO 2013102892 Jul 2013 WO
WO 2013126281 Aug 2013 WO
Non-Patent Literature Citations (337)
Entry
US 8,712,906, 04/2014, Sprague et al. (withdrawn)
US 8,725,631, 05/2014, Sprague et al. (withdrawn)
“A First Look: Predicting Market Demand for Food Retail using a Huff Analysis,” TRF Policy Solutions, Jul. 2012, pp. 30.
“A Quick Guide to UniProtKB Swiss-Prot & TrEMBL,” Sep. 2011, pp. 2.
“A Word About Banks and the Laundering of Drug Money,” Aug. 18, 2012, http://www.golemxiv.co.uk/2012/08/a-word-about-banks-and-the-laundering-of-drug-money/.
“HunchLab: Heat Map and Kernel Density Calculation for Crime Analysis,” Azavea Journal, printed from www.azavea.com/blogs/newsletter/v4i4/kernel-density-capabilities-added-to-hunchlab/ on Sep. 9, 2014, 2 pages.
“Money Laundering Risks and E-Gaming: A European Overview and Assessment,” 2009, http://www.cf.ac.uk/socsi/resources/Levi_Final_Money_Laundering _Risks_egaming.pdf.
“Potential Money Laundering Warning Signs,” snapshot taken 2003, https://web.archive.org/web/20030816090055/http:/finsolinc.com/ANTI-MONEY%20LAUNDERING%20TRAINING%20GUIDES.pdf.
“Refresh CSS Ellipsis When Resizing Container—Stack Overflow,” Jul. 31, 2013, retrieved from internet http://stackoverflow.com/questions/17964681/refresh-css-ellipsis-when-resizing-container, retrieved on May 18, 2015.
“The FASTA Program Package,” fasta-36.3.4, Mar. 25, 2011, pp. 29.
“Using Whois Based Geolocation and Google Maps API for Support Cybercrime Investigations,” http://wseas.us/e-library/conferences/2013/Dubrovnik/TELECIRC/TELECIRC-32.pdf.
About 80 Minutes, “Palantir in a Number of Parts—Part 6—Graph,” Mar. 21, 2013, pp. 1-6.
Acklen, Laura, “Absolute Beginner's Guide to Microsoft Word 2003,” Dec. 24, 2003, pp. 15-18, 34-41, 308-316.
Alfred, Rayner “Summarizing Relational Data Using Semi-Supervised Genetic Algorithm-Based Clustering Techniques”, Journal of Computer Science, 2010, vol. 6, No. 7, pp. 775-784.
Alur et al., “Chapter 2: IBM InfoSphere DataStage Stages,” IBM InfoSphere DataStage Data Flow and Job Design, Jul. 1, 2008, pp. 35-137.
Amnet, “5 Great Tools for Visualizing Your Twitter Followers,” posted Aug. 4, 2010, http://www.amnetblog.com/component/content/article/115-5-grate-tools-for-visualizing-your-twitter-followers.html.
Ananiev et al., “The New Modality API,” http://web.archive.org/web/20061211011958/http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/modality/ Jan. 21, 2006, pp. 8.
Appacts, “Smart Thinking for Super Apps,” http://www.appacts.com Printed Jul. 18, 2013 in 4 pages.
Apsalar, “Data Powered Mobile Advertising,” “Free Mobile App Analytics” and various analytics related screen shots http://apsalar.com Printed Jul. 18, 2013 in 8 pages.
Bhosale, Safal V., “Holy Grail of Outlier Detection Technique: A Macro Level Take on the State of the Art,” International Journal of Computer Science & Information Technology, Aug. 1, 2014, retrieved from http://www.ijcsit.com/docs/Volume5/vol5issue04/ijcsit20140504226.pdf retrieved May 3, 2016.
Bluttman et al., “Excel Formulas and Functions for Dummies,” 2005, Wiley Publishing, Inc., pp. 280, 284-286.
Boyce, Jim, “Microsoft Outlook 2010 Inside Out,” Aug. 1, 2010, retrieved from the internet https://capdtron.files.wordpress.com/2013/01/outlook-2010-inside_out.pdf.
Bugzilla@Mozilla, “Bug 18726—[feature] Long-click means of invoking contextual menus not supported,” http://bugzilla.mozilla.org/show_bug.cgi?id=18726 printed Jun. 13, 2013 in 11 pages.
Canese et al., “Chapter 2: PubMed: The Bibliographic Database,” The NCBI Handbook, Oct. 2002, pp. 1-10.
Capptain—Pilot Your Apps, http://www.capptain.com Printed Jul. 18, 2013 in 6 pages.
Celik, Tantek, “CSS Basic User Interface Module Level 3 (CSS3 UI),” Section 8 Resizing and Overflow, Jan. 17, 2012, retrieved from internet http://www.w3.org/TR/2012/WD-css3-ui-20120117/#resizing-amp-overflow retrieved on May 18, 2015.
Chen et al., “Bringing Order to the Web: Automatically Categorizing Search Results,” CHI 2000, Proceedings of the SIGCHI conference on Human Factors in Computing Systems, Apr. 1-6, 2000, The Hague, The Netherlands, pp. 145-152.
Chung, Chin-Wan, “Dataplex: An Access to Heterogeneous Distributed Databases”, Communications of the ACM, Association for Computing Machinery, Inc., vol. 33, Issue No. 1, pp. 70-80, Jan. 1, 1990.
Conner, Nancy, “Google Apps: The Missing Manual,” May 1, 2008, pp. 15.
Countly Mobile Analytics, http://count.ly/ Printed Jul. 18, 2013 in 9 pages.
Definition “Identify”, downloaded Jan. 22, 2015, 1 page.
Definition “Overlay”, downloaded Jan. 22, 2015, 1 page.
Delcher et al., “Identifying Bacterial Genes and Endosymbiont DNA with Glimmer,” BioInformatics, vol. 23, No. 6, 2007, pp. 673-679.
Distimo—App Analytics, http://www.distimo.com/app-analytics Printed Jul. 18, 2013 in 5 pages.
Dramowicz, Ela, “Retail Trade Area Analysis Using the Huff Model,” Directions Magazine, Jul. 2, 2005 in 10 pages, http://www.directionsmag.com/articles/retail-trade-area-analysis-using-the-huff-model/123411.
Flurry Analytics, http://www.flurry.com/ Printed Jul. 18, 2013 in 14 pages.
Gesher, Ari, “Palantir Screenshots in the Wild: Swing Sightings,” The Palantir Blog, Sep. 11, 2007, pp. 1-12.
GIS-NET 3 Public - Department of Regional Planning. Planning & Zoning Information for Unincorporated LA County. Retrieved Oct. 2, 2013 from http://gis.planning.lacounty.gov/GIS-NET3_Public/Viewer.html.
Golmohammadi et al., “Data Mining Applications for Fraud Detection in Securities Market,” Intelligence and Security Informatics Conference (EISIC), 2012 European, IEEE, Aug. 22, 2012, pp. 107-114.
Google Analytics Official Website—Web Analytics & Reporting, http://www.google.com/analytics.index.html Printed Jul. 18, 2013 in 22 pages.
Gorr et al., “Crime Hot Spot Forecasting: Modeling and Comparative Evaluation”, Grant 98-IJ-CX-K005, May 6, 2002, 37 pages.
Goswami, Gautam, “Quite Writly Said!,” One Brick at a Time, Aug. 21, 2005, pp. 7.
Griffith, Daniel A., “A Generalized Huff Model,” Geographical Analysis, Apr. 1982, vol. 14, No. 2, pp. 135-144.
Gu et al., “Record Linkage: Current Practice and Future Directions,” Jan. 15, 2004, pp. 32.
Gu et al., “BotMiner: Clustering Analysis of Network Traffic for Protocol- and Structure-Independent Botnet Detection,” USENIX Security Symposium, 2008, 17 pages.
Hansen et al., “Analyzing Social Media Networks with NodeXL: Insights from a Connected World”, Chapter 4, pp. 53-67 and Chapter 10, pp. 143-164, published Sep. 2010.
Hardesty, “Privacy Challenges: Analysis: It's Surprisingly Easy to Identify Individuals from Credit-Card Metadata,” MIT News on Campus and Around the World, MIT News Office, Jan. 29, 2015, 3 pages.
Hibbert et al., “Prediction of Shopping Behavior Using a Huff Model Within a GIS Framework,” Healthy Eating in Context, Mar. 18, 2011, pp. 16.
Hodge et al., “A Survey of Outlier Detection Methodologies,” Artificial Intelligence Review, vol. 22, No. 2, Oct. 1, 2004.
Hogue et al., “Thresher: Automating the Unwrapping of Semantic Content from the World Wide Web,” 14th International Conference on World Wide Web, WWW 2005: Chiba, Japan, May 10-14, 2005, pp. 86-95.
Hua et al., “A Multi-attribute Data Structure with Parallel Bloom Filters for Network Services”, HiPC 2006, LNCS 4297, pp. 277-288, 2006.
Huang et al., “Systematic and Integrative Analysis of Large Gene Lists Using DAVID Bioinformatics Resources,” Nature Protocols, 4.1, 2008, 44-57.
Huff et al., “Calibrating the Huff Model Using ArcGIS Business Analyst,” ESRI, Sep. 2008, pp. 33.
Huff, David L., “Parameter Estimation in the Huff Model,” ESRI, ArcUser, Oct.-Dec. 2003, pp. 34-36.
Kahan et al., “Annotea: an Open RDF Infrastructure for Shared Web Annotations”, Computer Networks, Elsevier Science Publishers B.V., vol. 39, No. 5, dated Aug. 5, 2002, pp. 589-608.
Keylines.com, “An Introduction to KeyLines and Network Visualization,” Mar. 2014, http://keylines.com/wp-content/uploads/2014/03/KeyLines-White-Paper.pdf downloaded May 12, 2014 in 8 pages.
Keylines.com, “KeyLines Datasheet,” Mar. 2014, http://keylines.com/wp-content/uploads/2014/03/KeyLines-datasheet.pdf downloaded May 12, 2014 in 2 pages.
Keylines.com, “Visualizing Threats: Improved Cyber Security Through Network Visualization,” Apr. 2014, http://keylines.com/wp-content/uploads/2014/04/Visualizing-Threats1.pdf downloaded May 12, 2014 in 10 pages.
Kitts, Paul, “Chapter 14: Genome Assembly and Annotation Process,” The NCBI Handbook, Oct. 2002, pp. 1-21.
Kontagent Mobile Analytics, http://www.kontagent.com/ Printed Jul. 18, 2013 in 9 pages.
Li et al., “Interactive Multimodal Visual Search on Mobile Device,” IEEE Transactions on Multimedia, vol. 15, No. 3, Apr. 1, 2013, pp. 594-607.
Li et al., “Identifying the Signs of Fraudulent Accounts using Data Mining Techniques,” Computers in Human Behavior, vol. 28, No. 3, Jan. 16, 2012.
Liu, Tianshun, “Combining GIS and the Huff Model to Analyze Suitable Locations for a New Asian Supermarket in the Minneapolis and St. Paul, Minnesota USA,” Papers in Resource Analysis, 2012, vol. 14, pp. 8.
Localytics—Mobile App Marketing & Analytics, http://www.localytics.com/ Printed Jul. 18, 2013 in 12 pages.
Madden, Tom, “Chapter 16: The BLAST Sequence Analysis Tool,” The NCBI Handbook, Oct. 2002, pp. 1-15.
Manno et al., “Introducing Collaboration in Single-user Applications through the Centralized Control Architecture,” 2010, pp. 10.
Manske, “File Saving Dialogs,” http://www.mozilla.org/editor/ui_specs/FileSaveDialogs.html, Jan. 20, 1999, pp. 7.
Map Builder, “Rapid Mashup Development Tool for Google and Yahoo Maps!” http://web.archive.org/web/20090626224734/http://www.mapbuilder.net/ printed Jul. 20, 2012 in 2 pages.
Map of San Jose, CA. Retrieved Oct. 2, 2013 from http://maps.yahoo.com.
Map of San Jose, CA. Retrieved Oct. 2, 2013 from http://maps.bing.com.
Map of San Jose, CA. Retrieved Oct. 2, 2013 from http://maps.google.com.
Microsoft—Developer Network, “Getting Started with VBA in Word 2010,” Apr. 2010, http://msdn.microsoft.com/en-us/library/ff604039%28v=office.14%29.aspx as printed Apr. 4, 2014 in 17 pages.
Microsoft Office—Visio, “About connecting shapes,” http://office.microsoft.com/en-us/visio-help/about-connecting-shapes-HP085050369.aspx printed Aug. 4, 2011 in 6 pages.
Microsoft Office—Visio, “Add and glue connectors with the Connector tool,” http://office.microsoft.com/en-us/visio-help/add-and-glue-connectors-with-the-connector-tool-HA010048532.aspx?CTT=1 printed Aug. 4, 2011 in 1 page.
Mixpanel—Mobile Analytics, https://mixpanel.com/ Printed Jul. 18, 2013 in 13 pages.
Mizrachi, Ilene, “Chapter 1: GenBank: The Nucleotide Sequence Database,” The NCBI Handbook, Oct. 2002, pp. 1-14.
Ngai et al., “The Application of Data Mining Techniques in Financial Fraud Detection: A Classification Framework and an Academic Review of Literature,” Decision Support Systems, Elsevier Science Publishers, Amsterdam, Netherlands, vol. 50, No. 3, Feb. 1, 2011.
Nierman, “Evaluating Structural Similarity in XML Documents”, 6 pages, 2002.
Nolan et al., “MCARTA: A Malicious Code Automated Run-Time Analysis Framework,” Homeland Security (HST) 2012 IEEE Conference on Technologies for, Nov. 13, 2012, pp. 13-17.
Olanoff, “Deep Dive with the New Google Maps for Desktop with Google Earth Integration, It's More than Just a Utility,” May 15, 2013, pp. 1-6, retrieved from the internet: http://web.archive.org/web/20130515230641/http://techcrunch.com/2013/05/15/deep-dive-with-the-new-google-maps-for-desktop-with-google-earth-integration-its-more-than-just-a-utility/.
Open Web Analytics (OWA), http://www.openwebanalytics.com/ Printed Jul. 19, 2013 in 5 pages.
Palantir Technologies, “Palantir Labs_Timeline,” Oct. 1, 2010, retrieved from the internet https://www.youtube.com/watch?v=JCgDW5bru9M.
Palmas et al., “An Edge-Bundling Layout for Interactive Parallel Coordinates,” 2014 IEEE Pacific Visualization Symposium, pp. 57-64.
Piwik—Free Web Analytics Software. http://piwik.org/ Printed Jul. 19, 2013 in 18 pages.
Quartet FS, “Managing Business Performance and Detecting Outliers in Financial Services,” Oct. 16, 2014, retrieved from https://quartetfs.com/images/pdf/white-papers/Quartet_FS_White_Paper_-_ActivePivot_Sentinel.pdf retrieved on May 3, 2016.
Quartet FS, “Resource Center,” Oct. 16, 2014, retrieved from https://web.archive.org/web/20141016044306/http://quartetfs.com/resource-center/white-papers retrieved May 3, 2016.
Quest, “Toad for ORACLE 11.6—Guide to Using Toad,” Sep. 24, 2012, pp. 1-162.
Rouse, Margaret, “OLAP Cube,” http://searchdatamanagement.techtarget.com/definition/OLAP-cube, Apr. 28, 2012, pp. 16.
Shah, Chintan, “Periodic Connections to Control Server Offer New Way to Detect Botnets,” Oct. 24, 2013 in 6 pages, http://www.blogs.mcafee.com/mcafee-labs/periodic-links-to-control-server-offer-new-way-to-detect-botnets.
Shi et al., “A Scalable Implementation of Malware Detection Based on Network Connection Behaviors,” 2013 International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery, IEEE, Oct. 10, 2013, pp. 59-66.
Sigrist, et al., “PROSITE, a Protein Domain Database for Functional Characterization and Annotation,” Nucleic Acids Research, 2010, vol. 38, pp. D161-D166.
Sirotkin et al., “Chapter 13: The Processing of Biological Sequence Data at NCBI,” The NCBI Handbook, Oct. 2002, pp. 1-11.
StatCounter—Free Invisible Web Tracker, Hit Counter and Web Stats, http://statcounter.com/ Printed Jul. 19, 2013 in 17 pages.
Symantec Corporation, “E-Security Begins with Sound Security Policies,” Announcement Symantec, Jun. 14, 2001.
TestFlight—Beta Testing on the Fly, http://testflightapp.com/ Printed Jul. 18, 2013 in 3 pages.
Thompson, Mick, “Getting Started with GEO,” Getting Started with GEO, Jul. 26, 2011.
trak.io, http://trak.io/ printed Jul. 18, 2013 in 3 pages.
Umagandhi et al., “Search Query Recommendations Using Hybrid User Profile with Query Logs,” International Journal of Computer Applications, vol. 80, No. 10, Oct. 1, 2013, pp. 7-18.
UserMetrix, http://usermetrix.com/android-analytics printed Jul. 18, 2013 in 3 pages.
Valentini et al., “Ensembles of Learning Machines”, M. Marinaro and R. Tagliaferri (Eds.): WIRN VIETRI 2002, LNCS 2486, pp. 3-20.
Vose et al., “Help File for ModelRisk Version 5,” 2007, Vose Software, pp. 349-353. [Uploaded in 2 Parts].
Wiggerts, T.A., “Using Clustering Algorithms in Legacy Systems Remodularization,” Reverse Engineering, Proceedings of the Fourth Working Conference, Netherlands, Oct. 6-8, 1997, IEEE Computer Soc., pp. 33-43.
Wikipedia, “Federated Database System,” Sep. 7, 2013, retrieved from the internet on Jan. 27, 2015 http://en.wikipedia.org/w/index.php?title=Federated_database_system&oldid=571954221.
Wright et al., “Palantir Technologies VAST 2010 Challenge Text Records - Investigations into Arms Dealing,” Oct. 29, 2010, pp. 1-10.
Yang et al., “HTML Page Analysis Based on Visual Cues”, A129, pp. 859-864, 2001.
International Search Report and Written Opinion in Application No. PCT/US2009/056703, dated Mar. 15, 2010.
Notice of Acceptance for Australian Patent Application No. 2014250678 dated Oct. 7, 2015.
Notice of Allowance for U.S. Appl. No. 12/556,318 dated Nov. 2, 2015.
Notice of Allowance for U.S. Appl. No. 14/102,394 dated Aug. 25, 2014.
Notice of Allowance for U.S. Appl. No. 14/108,187 dated Aug. 29, 2014.
Notice of Allowance for U.S. Appl. No. 14/135,289 dated Oct. 14, 2014.
Notice of Allowance for U.S. Appl. No. 14/139,628 dated Jun. 24, 2015.
Notice of Allowance for U.S. Appl. No. 14/139,640 dated Jun. 17, 2015.
Notice of Allowance for U.S. Appl. No. 14/139,713 dated Jun. 12, 2015.
Notice of Allowance for U.S. Appl. No. 14/148,568 dated Aug. 26, 2015.
Notice of Allowance for U.S. Appl. No. 14/192,767 dated Dec. 16, 2014.
Notice of Allowance for U.S. Appl. No. 14/225,084 dated May 4, 2015.
Notice of Allowance for U.S. Appl. No. 14/264,445 dated May 14, 2015.
Notice of Allowance for U.S. Appl. No. 14/268,964 dated Dec. 3, 2014.
Notice of Allowance for U.S. Appl. No. 14/278,963 dated Sep. 2, 2015.
Notice of Allowance for U.S. Appl. No. 14/294,098 dated Dec. 29, 2014.
Notice of Allowance for U.S. Appl. No. 14/323,935 dated Oct. 1, 2015.
Notice of Allowance for U.S. Appl. No. 14/326,738 dated Nov. 18, 2015.
Notice of Allowance for U.S. Appl. No. 14/473,552 dated Jul. 24, 2015.
Notice of Allowance for U.S. Appl. No. 14/473,860 dated Feb. 27, 2015.
Notice of Allowance for U.S. Appl. No. 14/479,863 dated Mar. 31, 2015.
Notice of Allowance for U.S. Appl. No. 14/486,991 dated May 1, 2015.
Notice of Allowance for U.S. Appl. No. 14/504,103 dated May 18, 2015.
Notice of Allowance for U.S. Appl. No. 14/579,752 dated Apr. 4, 2016.
Notice of Allowance for U.S. Appl. No. 14/616,080 dated Apr. 2, 2015.
Notice of Allowance for U.S. Appl. No. 14/698,432 dated Sep. 28, 2016.
Notice of Allowance for U.S. Appl. No. 14/816,748 dated Oct. 19, 2016.
Notice of Allowance for U.S. Appl. No. 15/072,174 dated Jul. 13, 2016.
Official Communication for Australian Patent Application No. 2014201511 dated Feb. 27, 2015.
Official Communication for Australian Patent Application No. 2014202442 dated Mar. 19, 2015.
Official Communication for Australian Patent Application No. 2014210604 dated Jun. 5, 2015.
Official Communication for Australian Patent Application No. 2014210614 dated Jun. 5, 2015.
Official Communication for Australian Patent Application No. 2014213553 dated May 7, 2015.
Official Communication for Australian Patent Application No. 2014250678 dated Jun. 17, 2015.
Official Communication for European Patent Application No. 14158861.6 dated Jun. 16, 2014.
Official Communication for European Patent Application No. 14159464.8 dated Jul. 31, 2014.
Official Communication for European Patent Application No. 14159535.5 dated May 22, 2014.
Official Communication for European Patent Application No. 14180142.3 dated Feb. 6, 2015.
Official Communication for European Patent Application No. 14180281.9 dated Jan. 26, 2015.
Official Communication for European Patent Application No. 14180321.3 dated Apr. 17, 2015.
Official Communication for European Patent Application No. 14180432.8 dated Jun. 23, 2015.
Official Communication for European Patent Application No. 14186225.0 dated Feb. 13, 2015.
Official Communication for European Patent Application No. 14187739.9 dated Jul. 6, 2015.
Official Communication for European Patent Application No. 14187996.5 dated Feb. 12, 2015.
Official Communication for European Patent Application No. 14189344.6 dated Feb. 20, 2015.
Official Communication for European Patent Application No. 14189347.9 dated Mar. 4, 2015.
Official Communication for European Patent Application No. 14189802.3 dated May 11, 2015.
Official Communication for European Patent Application No. 14191540.5 dated May 27, 2015.
Official Communication for European Patent Application No. 14197879.1 dated Apr. 28, 2015.
Official Communication for European Patent Application No. 14197895.7 dated Apr. 28, 2015.
Official Communication for European Patent Application No. 14197938.5 dated Apr. 28, 2015.
Official Communication for European Patent Application No. 14199182.8 dated Mar. 13, 2015.
Official Communication for European Patent Application No. 15155845.9 dated Oct. 6, 2015.
Official Communication for European Patent Application No. 15155846.7 dated Jul. 8, 2015.
Official Communication for European Patent Application No. 15156004.2 dated Aug. 24, 2015.
Official Communication for European Patent Application No. 15165244.3 dated Aug. 27, 2015.
Official Communication for European Patent Application No. 15175151.8 dated Nov. 25, 2015.
Official Communication for European Patent Application No. 15180515.7 dated Dec. 14, 2015.
Official Communication for European Patent Application No. 15183721.8 dated Nov. 23, 2015.
Official Communication for European Patent Application No. 15193287.8 dated Apr. 1, 2016.
Official Communication for European Patent Application No. 15201727.3 dated May 23, 2016.
Official Communication for European Patent Application No. 15202090.5 dated May 13, 2016.
Official Communication for European Patent Application No. 16183052.6 dated Dec. 12, 2016.
Official Communication for Great Britain Patent Application No. 1404457.2 dated Aug. 14, 2014.
Official Communication for Great Britain Patent Application No. 1404486.1 dated May 21, 2015.
Official Communication for Great Britain Patent Application No. 1404486.1 dated Aug. 27, 2014.
Official Communication for Great Britain Patent Application No. 1404489.5 dated May 21, 2015.
Official Communication for Great Britain Patent Application No. 1404489.5 dated Aug. 27, 2014.
Official Communication for Great Britain Patent Application No. 1404489.5 dated Oct. 6, 2014.
Official Communication for Great Britain Patent Application No. 1404499.4 dated Aug. 20, 2014.
Official Communication for Great Britain Patent Application No. 1404574.4 dated Dec. 18, 2014.
Official Communication for Great Britain Patent Application No. 1411984.6 dated Dec. 22, 2014.
Official Communication for Great Britain Patent Application No. 1413935.6 dated Jan. 27, 2015.
Official Communication for Netherlands Patent Application No. 2012433 dated Mar. 11, 2016.
Official Communication for Netherlands Patent Application No. 2012437 dated Sep. 18, 2015.
Official Communication for Netherlands Patent Application No. 2013306 dated Apr. 24, 2015.
Official Communication for New Zealand Patent Application No. 622181 dated Mar. 24, 2014.
Official Communication for New Zealand Patent Application No. 622439 dated Mar. 24, 2014.
Official Communication for New Zealand Patent Application No. 622439 dated Jun. 6, 2014.
Official Communication for New Zealand Patent Application No. 622473 dated Jun. 19, 2014.
Official Communication for New Zealand Patent Application No. 622473 dated Mar. 27, 2014.
Official Communication for New Zealand Patent Application No. 622513 dated Apr. 3, 2014.
Official Communication for New Zealand Patent Application No. 622517 dated Apr. 3, 2014.
Official Communication for New Zealand Patent Application No. 624557 dated May 14, 2014.
Official Communication for New Zealand Patent Application No. 627061 dated Jul. 14, 2014.
Official Communication for New Zealand Patent Application No. 627962 dated Aug. 5, 2014.
Official Communication for New Zealand Patent Application No. 628150 dated Aug. 15, 2014.
Official Communication for New Zealand Patent Application No. 628161 dated Aug. 25, 2014.
Official Communication for New Zealand Patent Application No. 628263 dated Aug. 12, 2014.
Official Communication for New Zealand Patent Application No. 628495 dated Aug. 19, 2014.
Official Communication for New Zealand Patent Application No. 628585 dated Aug. 26, 2014.
Official Communication for New Zealand Patent Application No. 628840 dated Aug. 28, 2014.
Official Communication for U.S. Appl. No. 12/556,318 dated Jul. 2, 2015.
Official Communication for U.S. Appl. No. 13/247,987 dated Apr. 2, 2015.
Official Communication for U.S. Appl. No. 13/247,987 dated Sep. 22, 2015.
Official Communication for U.S. Appl. No. 13/827,491 dated Dec. 1, 2014.
Official Communication for U.S. Appl. No. 13/831,791 dated Mar. 4, 2015.
Official Communication for U.S. Appl. No. 13/831,791 dated Aug. 6, 2015.
Official Communication for U.S. Appl. No. 13/835,688 dated Jun. 17, 2015.
Official Communication for U.S. Appl. No. 13/839,026 dated Aug. 4, 2015.
Official Communication for U.S. Appl. No. 14/134,558 dated Oct. 7, 2015.
Official Communication for U.S. Appl. No. 14/139,628 dated Jan. 5, 2015.
Official Communication for U.S. Appl. No. 14/139,640 dated Dec. 15, 2014.
Official Communication for U.S. Appl. No. 14/139,713 dated Dec. 15, 2014.
Official Communication for U.S. Appl. No. 14/148,568 dated Oct. 22, 2014.
Official Communication for U.S. Appl. No. 14/148,568 dated Mar. 26, 2015.
Official Communication for U.S. Appl. No. 14/196,814 dated May 5, 2015.
Official Communication for U.S. Appl. No. 14/225,006 dated Sep. 10, 2014.
Official Communication for U.S. Appl. No. 14/225,006 dated Sep. 2, 2015.
Official Communication for U.S. Appl. No. 14/225,006 dated Feb. 27, 2015.
Official Communication for U.S. Appl. No. 14/225,084 dated Sep. 11, 2015.
Official Communication for U.S. Appl. No. 14/225,084 dated Sep. 2, 2014.
Official Communication for U.S. Appl. No. 14/225,084 dated Feb. 20, 2015.
Official Communication for U.S. Appl. No. 14/225,160 dated Feb. 11, 2015.
Official Communication for U.S. Appl. No. 14/225,160 dated Aug. 12, 2015.
Official Communication for U.S. Appl. No. 14/225,160 dated May 20, 2015.
Official Communication for U.S. Appl. No. 14/225,160 dated Oct. 22, 2014.
Official Communication for U.S. Appl. No. 14/225,160 dated Jul. 29, 2014.
Official Communication for U.S. Appl. No. 14/251,485 dated Oct. 1, 2015.
Official Communication for U.S. Appl. No. 14/264,445 dated Apr. 17, 2015.
Official Communication for U.S. Appl. No. 14/268,964 dated Sep. 3, 2014.
Official Communication for U.S. Appl. No. 14/278,963 dated Jan. 30, 2015.
Official Communication for U.S. Appl. No. 14/289,596 dated Jul. 18, 2014.
Official Communication for U.S. Appl. No. 14/289,596 dated Jan. 26, 2015.
Official Communication for U.S. Appl. No. 14/289,596 dated Apr. 30, 2015.
Official Communication for U.S. Appl. No. 14/289,599 dated Jul. 22, 2014.
Official Communication for U.S. Appl. No. 14/289,599 dated May 29, 2015.
Official Communication for U.S. Appl. No. 14/289,599 dated Sep. 4, 2015.
Official Communication for U.S. Appl. No. 14/294,098 dated Aug. 15, 2014.
Official Communication for U.S. Appl. No. 14/294,098 dated Nov. 6, 2014.
Official Communication for U.S. Appl. No. 14/306,138 dated Sep. 14, 2015.
Official Communication for U.S. Appl. No. 14/306,138 dated Feb. 18, 2015.
Official Communication for U.S. Appl. No. 14/306,138 dated Sep. 23, 2014.
Official Communication for U.S. Appl. No. 14/306,138 dated May 26, 2015.
Official Communication for U.S. Appl. No. 14/306,138 dated Dec. 3, 2015.
Official Communication for U.S. Appl. No. 14/306,147 dated Feb. 19, 2015.
Official Communication for U.S. Appl. No. 14/306,147 dated Aug. 7, 2015.
Official Communication for U.S. Appl. No. 14/306,147 dated Sep. 9, 2014.
Official Communication for U.S. Appl. No. 14/306,154 dated Mar. 11, 2015.
Official Communication for U.S. Appl. No. 14/306,154 dated May 15, 2015.
Official Communication for U.S. Appl. No. 14/306,154 dated Nov. 16, 2015.
Official Communication for U.S. Appl. No. 14/306,154 dated Jul. 6, 2015.
Official Communication for U.S. Appl. No. 14/306,154 dated Sep. 9, 2014.
Official Communication for U.S. Appl. No. 14/319,161 dated Jan. 23, 2015.
Official Communication for U.S. Appl. No. 14/319,765 dated Sep. 10, 2015.
Official Communication for U.S. Appl. No. 14/319,765 dated Jun. 16, 2015.
Official Communication for U.S. Appl. No. 14/319,765 dated Nov. 25, 2014.
Official Communication for U.S. Appl. No. 14/319,765 dated Feb. 4, 2015.
Official Communication for U.S. Appl. No. 14/323,935 dated Jun. 22, 2015.
Official Communication for U.S. Appl. No. 14/323,935 dated Nov. 28, 2014.
Official Communication for U.S. Appl. No. 14/323,935 dated Mar. 31, 2015.
Official Communication for U.S. Appl. No. 14/326,738 dated Dec. 2, 2014.
Official Communication for U.S. Appl. No. 14/326,738 dated Jul. 31, 2015.
Official Communication for U.S. Appl. No. 14/326,738 dated Mar. 31, 2015.
Official Communication for U.S. Appl. No. 14/451,221 dated Oct. 21, 2014.
Official Communication for U.S. Appl. No. 14/463,615 dated Nov. 13, 2014.
Official Communication for U.S. Appl. No. 14/463,615 dated May 21, 2015.
Official Communication for U.S. Appl. No. 14/463,615 dated Jan. 28, 2015.
Official Communication for U.S. Appl. No. 14/473,552 dated Feb. 24, 2015.
Official Communication for U.S. Appl. No. 14/473,860 dated Nov. 4, 2014.
Official Communication for U.S. Appl. No. 14/483,527 dated Jan. 28, 2015.
Official Communication for U.S. Appl. No. 14/486,991 dated Mar. 10, 2015.
Official Communication for U.S. Appl. No. 14/490,612 dated Aug. 18, 2015.
Official Communication for U.S. Appl. No. 14/504,103 dated Feb. 5, 2015.
Official Communication for U.S. Appl. No. 14/518,757 dated Dec. 1, 2015.
Official Communication for U.S. Appl. No. 14/518,757 dated Apr. 2, 2015.
Official Communication for U.S. Appl. No. 14/518,757 dated Jul. 20, 2015.
Official Communication for U.S. Appl. No. 14/571,098 dated Mar. 11, 2015.
Official Communication for U.S. Appl. No. 14/579,752 dated Aug. 19, 2015.
Official Communication for U.S. Appl. No. 14/579,752 dated May 26, 2015.
Official Communication for U.S. Appl. No. 14/579,752 dated Dec. 9, 2015.
Official Communication for U.S. Appl. No. 14/581,920 dated Mar. 1, 2016.
Official Communication for U.S. Appl. No. 14/581,920 dated Jun. 13, 2016.
Official Communication for U.S. Appl. No. 14/581,920 dated May 3, 2016.
Official Communication for U.S. Appl. No. 14/631,633 dated Sep. 10, 2015.
Official Communication for U.S. Appl. No. 14/639,606 dated Oct. 16, 2015.
Official Communication for U.S. Appl. No. 14/639,606 dated May 18, 2015.
Official Communication for U.S. Appl. No. 14/639,606 dated Jul. 24, 2015.
Official Communication for U.S. Appl. No. 14/639,606 dated Apr. 5, 2016.
Official Communication for U.S. Appl. No. 14/698,432 dated Jun. 3, 2016.
Official Communication for U.S. Appl. No. 14/726,353 dated Mar. 1, 2016.
Official Communication for U.S. Appl. No. 14/726,353 dated Sep. 10, 2015.
Official Communication for U.S. Appl. No. 14/813,749 dated Sep. 28, 2015.
Official Communication for U.S. Appl. No. 14/816,748 dated Apr. 1, 2016.
Official Communication for U.S. Appl. No. 14/816,748 dated May 24, 2016.
Official Communication for U.S. Appl. No. 14/857,071 dated Mar. 2, 2016.
Official Communication for U.S. Appl. No. 15/072,174 dated Jun. 1, 2016.
Official Communication for U.S. Appl. No. 15/253,717 dated Dec. 1, 2016.
Restriction Requirement for U.S. Appl. No. 13/839,026 dated Apr. 2, 2015.
Restriction Requirement for U.S. Appl. No. 14/857,071 dated Dec. 11, 2015.
Baker et al., “The Development of a Common Enumeration of Vulnerabilities and Exposures,” Presented at the Second International Workshop on Recent Advances in Intrusion Detection, Sep. 7-9, 1999, pp. 35.
Bhuyan et al., “Network Anomaly Detection: Methods, Systems and Tools,” First Quarter 2014, IEEE.
Crosby et al., “Efficient Data Structures for Tamper-Evident Logging,” Department of Computer Science, Rice University, 2009, pp. 17.
FireEye—Products and Solutions Overview, http://www.fireeye.com/products-and-solutions Printed Jun. 30, 2014 in 3 pages.
FireEye, http://www.fireeye.com/ Printed Jun. 30, 2014 in 2 pages.
Glaab et al., “EnrichNet: Network-Based Gene Set Enrichment Analysis,” Bioinformatics 28.18 (2012): pp. i451-i457.
Hur et al., “SciMiner: web-based literature mining tool for target identification and functional enrichment analysis,” Bioinformatics 25.6 (2009): pp. 838-840.
Lee et al., “A Data Mining and CIDF Based Approach for Detecting Novel and Distributed Intrusions,” Lecture Notes in Computer Science, vol. 1907, Nov. 11, 2000, pp. 49-65.
Ma et al., “A New Approach to Secure Logging,” ACM Transactions on Storage, vol. 5, No. 1, Article 2, Published Mar. 2009, 21 pages.
Schneier et al., “Automatic Event Stream Notarization Using Digital Signatures,” Security Protocols, International Workshop Apr. 1996 Proceedings, Springer-Verlag, 1997, pp. 155-169, https://schneier.com/paper-event-stream.pdf.
Schneier et al., “Cryptographic Support for Secure Logs on Untrusted Machines,” The Seventh USENIX Security Symposium Proceedings, USENIX Press, Jan. 1998, pp. 53-62, https://www.schneier.com/paper-secure-logs.pdf.
VirusTotal—About, http://www.virustotal.com/en/about/ Printed Jun. 30, 2014 in 8 pages.
Waters et al., “Building an Encrypted and Searchable Audit Log,” Published Jan. 9, 2004, 11 pages, http://www.parc.com/content/attachments/building_encrypted_searchable_5059_parc.pdf.
Zheng et al., “GOEAST: a web-based software toolkit for Gene Ontology enrichment analysis,” Nucleic Acids Research 36.suppl 2 (2008): pp. W358-W363.
Notice of Allowance for U.S. Appl. No. 14/033,076 dated Mar. 11, 2016.
Notice of Allowance for U.S. Appl. No. 14/223,918 dated Jan. 6, 2016.
Notice of Allowance for U.S. Appl. No. 14/473,860 dated Jan. 5, 2015.
Notice of Allowance for U.S. Appl. No. 14/823,935 dated Apr. 25, 2016.
Notice of Allowance for U.S. Appl. No. 14/970,317 dated May 26, 2016.
Official Communication for European Patent Application No. 14199180.2 dated Jun. 22, 2015.
Official Communication for European Patent Application No. 14199180.2 dated Aug. 31, 2015.
Official Communication for European Patent Application No. 15175106.2 dated Nov. 5, 2015.
Official Communication for European Patent Application No. 15180985.2 dated Jan. 15, 2016.
Official Communication for U.S. Appl. No. 14/223,918 dated Jun. 8, 2015.
Official Communication for U.S. Appl. No. 14/280,490 dated Jul. 24, 2014.
Official Communication for U.S. Appl. No. 14/479,863 dated Dec. 26, 2014.
Official Communication for U.S. Appl. No. 14/490,612 dated Jan. 27, 2015.
Official Communication for U.S. Appl. No. 14/490,612 dated Mar. 31, 2015.
Official Communication for U.S. Appl. No. 14/731,312 dated Apr. 14, 2016.
Official Communication for U.S. Appl. No. 14/823,935 dated Dec. 4, 2015.
Official Communication for U.S. Appl. No. 14/923,712 dated Feb. 12, 2016.
Official Communication for U.S. Appl. No. 14/970,317 dated Mar. 21, 2016.
Official Communication for U.S. Appl. No. 14/982,699 dated Mar. 25, 2016.
Official Communication for U.S. Appl. No. 15/071,064 dated Jun. 16, 2016.
Official Communication for European Patent Application No. 14200246.8 dated Oct. 19, 2017.
Official Communication for European Patent Application No. 14200246.8 dated May 29, 2015.
Official Communication for Great Britain Patent Application No. 1408025.3 dated Nov. 6, 2014.
Official Communication for U.S. Appl. No. 14/504,103 dated Mar. 31, 2015.
Perdisci et al., “Behavioral Clustering of HTTP-Based Malware and Signature Generation Using Malicious Network Traces,” USENIX, Mar. 18, 2010, pp. 1-14.
Official Communication for European Patent Application No. 15193287.8 dated Oct. 19, 2017.
Official Communication for U.S. Appl. No. 15/378,567 dated Jun. 30, 2017.
Official Communication for U.S. Appl. No. 15/378,567 dated Feb. 14, 2018.
Related Publications (1)
Number Date Country
20170237755 A1 Aug 2017 US
Continuations (3)
Number Date Country
Parent 14816748 Aug 2015 US
Child 15419718 US
Parent 14479863 Sep 2014 US
Child 14816748 US
Parent 14147402 Jan 2014 US
Child 14479863 US