The disclosed embodiments relate generally to systems and methods for assessing and remediating security risks in large distributed systems having many computers or machines interconnected by one or more communication networks.
In large corporate networks, and the networks of machines/computers used by large organizations, the number of managed machines can be in the tens or hundreds of thousands, or even more, making security assessment and management very challenging. Administrators (e.g., “users” having the authority to identify and mitigate (or remediate) security risks in a network of machines/computers) are typically presented with an ever-expanding list of alerts about suspicious behavior and security risks to mitigate. Conventional methods typically only show how many systems are affected by a respective vulnerability, alert or security risk, or show the severity of a vulnerability (e.g., using a score produced using a Common Vulnerability Scoring System (CVSS)). With this information, users (e.g., system administrators, security analysts) typically will first address the security risk or vulnerability that affects the most machines, or address a vulnerability that has a high CVSS score. However, these traditional approaches lead to issues where vulnerabilities with lower CVSS scores and security risks that affect only a small number of machines are not brought to the user's attention, even if the vulnerability in combination with indirect administrative rights and/or lateral movement through the network presents a significant security risk to the distributed system.
Mitigating security risks in such a manner may not be the most effective way of protecting the distributed system, because there may be an incident where only one machine has a particular low-CVSS vulnerability, but that machine, when compromised, could allow an attacker to gain access, via lateral movement (e.g., making use of administrative rights information to compromise other machines in the distributed system), to other machines that contain sensitive information (e.g., social security information; confidential financial or banking information; confidential organizational, personal or technical information; etc.). Alternatively, there can be situations where a respective vulnerability affects a large number of systems, but is less critical than other vulnerabilities because the respective vulnerability, even if leveraged by an attacker, would not lead to sensitive information or important services being compromised. Therefore, under traditional approaches, security risks that require the most attention may not be brought to the system administrator's attention, or their criticality may not be accurately indicated in the information presented to the system administrator(s).
Accordingly, there is a need to prioritize security risks not by the number of machines they affect, but rather based on the potential damage to the distributed system presented by those security risks, and to present the system administrator with easy access to tools for obtaining additional information about the prioritized security risks and to tools for remediating those risks. To that end, in one aspect, a server system includes: one or more communications interfaces for coupling the server system to N machines in a collection of machines via one or more communication networks, where N is an integer greater than 10; one or more processors; and memory storing one or more programs, wherein the one or more programs include instructions for performing a set of operations.
Those operations, performed by the server system, include obtaining, at least in part from the N machines, system risk information that includes administrative rights information, identifying users and groups of users having administrative rights to respective machines of the N machines, and at least two of the following categories of information: open session information identifying open sessions between respective users and respective machines in the N machines; vulnerability information for vulnerabilities, in a set of predefined vulnerabilities, present at respective machines in the N machines; and missing patch information identifying missing software patches at respective machines in the N machines.
The operations performed by the server system further include identifying, for each respective machine in a first subset of the N machines, logically coupled machines, comprising machines of the N machines logically coupled to the respective machine via lateral movement, wherein lateral movement comprises access to the respective machine via one or more other machines using said administrative rights; and determining, for each respective machine in a second subset of the N machines, machine risk factors including one or more machine risk factors determined in accordance with the system risk information, and one or more lateral movement values, each lateral movement value corresponding to a number of logically coupled machines, logically coupled via lateral movement to the respective machine.
The operations performed by the server system further include generating, for each machine in at least a third subset of the N machines, a machine risk assessment value, wherein the machine risk assessment value is determined for each machine in the third subset based, at least in part, on a combination of the one or more machine risk factors and the one or more lateral movement values; presenting in a first user interface a sorted list of at least a fourth subset of the N machines, sorted in accordance with the machine risk assessment values generated for the machines in the third subset of the N machines, wherein the first user interface includes, for each respective machine in at least a subset of the machines listed in the first user interface, a link for accessing additional information about risk factors associated with the machine and for accessing one or more remediation tools for remediating one or more security risks associated with the respective machine; and performing a respective security risk remediation action in accordance with user selection of a respective remediation tool of the one or more remediation tools.
Like reference numerals refer to corresponding parts throughout the drawings.
It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without changing the meaning of the description, so long as all occurrences of the “first contact” are renamed consistently and all occurrences of the second contact are renamed consistently. The first contact and the second contact are both contacts, but they are not the same contact.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the embodiments and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the phrase “at least one of A, B and C” is to be construed to require one or more of the listed items, and this phrase reads on a single instance of A alone, a single instance of B alone, or a single instance of C alone, while also encompassing combinations of the listed items such as “one or more of A and one or more of B without any of C,” and the like.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention and the described embodiments. However, the invention may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
Networks Using Linear Communication Orbits
In some embodiments, the machines in a distributed system communicate using communication networks not organized as linear communication orbits, and instead communicate using one or more local area networks and wide area networks. However, since some embodiments of the security risk assessment and remediation systems described herein utilize linear communication orbits, networks of machines that utilize linear communication orbits are described herein with reference to
Optionally, machines 102 in managed network 100 are distributed across different geographical areas. Alternatively, machines 102 are located at the same physical location. A respective machine 102 communicates with another machine 102 or the server 108 using one or more communication networks. Such communications include communications during normal operations (e.g., user-level operations, such as emailing, Internet browsing, VoIP, database accessing, etc.). The communication network(s) used can be one or more networks having one or more types of topologies, including but not limited to the Internet, intranets, local area networks (LANs), cellular networks, Ethernet, Storage Area Networks (SANs), telephone networks, Bluetooth personal area networks (PAN) and the like. In some embodiments, two or more machines 102 in a sub-network are coupled via a wired connection, while at least some of the machines in the same sub-network are coupled via a Bluetooth PAN.
Machines 102 in managed network 100 are organized into one or more contiguous segments 106 (including 106a-c), each of which becomes a sub-network in the managed network 100. In some embodiments, each contiguous segment 106 is a respective linear communication orbit that supports system, security and network management communications within the managed network 100. Furthermore, each contiguous segment 106 includes one head node (e.g., head node 102a), one tail node (e.g., tail node 102b), and a sequence of zero or more intermediate client nodes (e.g., intermediate node(s) 102c) in between the head node and the tail node. In some embodiments, both the head node and tail node of a contiguous segment 106a are coupled to server 108, while the intermediate nodes of the contiguous segment 106a are not coupled to server 108. In some embodiments, only the head node of a contiguous segment 106b is coupled to the server 108, while the intermediate nodes and tail node are not coupled to the server 108.
In some embodiments, server 108 is coupled to a remote server (e.g., remote server 110) that is not part of managed network 100 and is optionally separated from managed network 100 by a wide area network 111, such as the Internet. For example, server 108 may receive from remote server 110 files or other information that it then distributes to computational machines 102 in managed network 100.
In some embodiments, all machines 102 coupled to a linear communication orbit 106 in network 100 are sorted into an ordered sequence according to a respective unique identifier associated with each machine 102. For example, respective IP addresses of machines 102 are used to sort the machines into an ordered sequence in the linear communication orbit. Each machine is provided with a predetermined set of rules for identifying its own predecessor and/or successor nodes given the unique identifiers of its potential neighbor machines. When a machine joins or leaves the linear communication orbit, it determines its ordinal position relative to one or more other machines in the linear communication orbit according to the unique identifiers and the aforementioned rules. More details on how a linear communication orbit is organized and how each intermediate node, head node or end node enters and leaves the linear communication orbit are provided in the Applicants' prior application, U.S. patent application Ser. No. 13/797,962, filed Mar. 12, 2013, entitled “Creation and Maintenance of Self-Organizing Communication Orbits in Distributed Networks,” which is hereby incorporated by reference in its entirety.
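As one concrete, purely illustrative reading of the ordering rule described above, the sketch below sorts machines by their IP addresses and shows how a joining machine could determine its predecessor and successor nodes; the choice of numeric IP order as the sort rule, and all names, are assumptions for illustration, not the disclosed implementation.

```python
import ipaddress

def orbit_order(machine_ips):
    # Sort machines into an ordered sequence by a unique identifier
    # (here, the numeric value of each machine's IPv4 address).
    return sorted(machine_ips, key=lambda ip: int(ipaddress.ip_address(ip)))

def neighbors(machine_ips, joining_ip):
    # A joining machine determines its ordinal position, and thus its
    # predecessor (toward the head node) and successor (toward the tail).
    orbit = orbit_order(machine_ips + [joining_ip])
    i = orbit.index(joining_ip)
    predecessor = orbit[i - 1] if i > 0 else None
    successor = orbit[i + 1] if i < len(orbit) - 1 else None
    return predecessor, successor
```

Under this sketch, a machine at 10.0.0.7 joining an orbit containing 10.0.0.2, 10.0.0.5 and 10.0.0.9 would identify 10.0.0.5 as its predecessor and 10.0.0.9 as its successor.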
Linear communication orbits, such as exemplary linear communication orbits 106a-106c, are established and maintained to facilitate system, security and/or network management operations ascribed to manual and programmed administration of network 100. Examples of system, security and network management operations include: (1) collecting status information (e.g., bandwidth, load, availability, resource inventory, application status, machine type, date of last update, security breach, errors, etc.) from individual machines of the managed network; (2) issuance of system, security and network management commands (e.g., commands related to shutdown, restart, failover, release of resources, change access authorizations, backup, deployment, quarantine, load balancing, etc.) for individual resources and/or machines on the managed network; (3) file distribution, including software installations and updates; (4) detecting presence of particular malicious programs (e.g., viruses, malware, security holes, etc.) on individual machines on the managed network; (5) removal of or disabling particular malicious programs (e.g., viruses, malware, security holes, etc.) on individual machines on the managed network; (6) disabling or suspending suspicious or high-risk operations and activities (e.g., Internet or operating system activities of suspected virus, malware, etc.) on particular machines on the managed network; (7) detecting unmanaged machines coupled to the managed network; (8) detecting data leakage (e.g., transmission of classified information) from machines on the managed network to locations or machines outside of the managed network; (9) detecting connection or data transfer to/from removable data storage devices (e.g., memory stick, or other removable storage devices) from/to particular ports (e.g., a USB drive) of particular machines on the managed network. 
Other system, security and network management operations are possible, as will be apparent to those of ordinary skill in the art.
In some implementations, a system management message is initially issued by the server (e.g., server 108) in the managed network, and includes a command (e.g., a command to obtain and install a specific object, such as a data file or software application, or software update) and a rule. In some embodiments, the system management message is initially received at a head node of an established linear communication orbit, and is then passed along the linear communication orbit to each node in the linear communication orbit until it reaches the tail node of the linear communication orbit. Each node of the linear communication orbit is sometimes called a computational machine; alternatively, each node of the linear communication orbit includes a computational machine.
The rule in the system management message is interpreted by each computational machine in the orbit, which determines whether that machine should execute the command. Alternately stated, the rule, when interpreted by each computational machine in the orbit, determines whether that machine needs a specific object, such as a data file, an application, an update, or the like. If the particular machine 102 determines that it satisfies the rule, and thus needs the object, it generates a plurality of data requests to request a plurality of shards, as described in more detail below. Each of the data requests is a request for respective specific data, herein called a shard. Together the shards form, or can be combined to form, an object, such as a file, an application, or an update.
Each data request propagates along a predetermined data request path that tracks the linear communication orbit until the requested respective specific data are identified at a respective machine that stores a cached copy of the requested respective specific data. The respective specific data are thereafter returned, along a data return path, to the computational machine 102 that made the request. Moreover, the requested specific data are selectively cached at the other computational machines located on the path to facilitate potential future installations and updates at other machines on the linear communication orbit. During the entire course of data caching and distribution, each individual computational machine on the linear communication orbit follows a predetermined routine to independently process system management messages, respond to any incoming data request and cache specific data that passes through.
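A highly simplified sketch of this caching behavior follows; it is an illustration under assumed data structures, not the disclosed protocol. A shard request travels along the orbit until a cached copy is found or the server is reached, and the shard is cached at machines on the return path.

```python
def request_shard(orbit_caches, requester, shard_id, server):
    """orbit_caches: list of per-machine cache dicts, ordered head-to-tail;
    requester: index of the requesting machine; server: fallback store."""
    # The request propagates downstream until a cached copy is identified.
    source = len(orbit_caches)
    for i in range(requester + 1, len(orbit_caches)):
        if shard_id in orbit_caches[i]:
            source, data = i, orbit_caches[i][shard_id]
            break
    else:
        data = server[shard_id]  # no local copy: fetch over the WAN
    # On the return path, the shard is cached at intermediate machines,
    # so later requests for the same shard stay on the local network.
    for j in range(requester, source):
        orbit_caches[j][shard_id] = data
    return data
```

After one request resolves, every machine between the requester and the source holds a cached copy, which is what lets subsequent requests for the same data avoid the wide area network.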
Data distribution in the linear communication orbit(s) 106 shown in
In accordance with many embodiments of the present invention, data caching and distribution is performed using local resources of a linear communication orbit, except when none of the machines on the linear communication orbit have the specific data requested by a machine on the linear communication orbit. Bandwidth on the local area network, which interconnects the machines on the linear communication orbit, is typically very low cost compared with bandwidth on the wide area network that connects a server to one or more machines on the linear communication orbit. In many implementations, the local area network also provides higher data communication rates than the wide area network. Distributed, automatic local caching minimizes or reduces the use of a wide area communication network to retrieve data from a server, particularly when multiple machines on the same linear communication orbit make requests for the same data. As a result, overall performance of a large-scale managed network is improved compared with systems in which each managed machine obtains requested data via requests directed to a server via a wide area network. Due to local caching of content, and locally made caching decisions, far fewer file servers are needed in systems implementing the methods described herein, compared with conventional, large-scale managed networks of computational machines.
Each machine on the linear communication orbit is coupled to an immediate neighbor machine or a server via a bidirectional communication link that includes a data incoming channel and a data outgoing channel. Intermediate machine 114 has four communication channels (e.g., channels 134-140) that form two bidirectional communication links to couple itself to two respective neighbor machines (e.g., predecessor node 124 and successor node 126). Head node 118 and tail node 122 are similarly coupled to a respective neighbor machine, and in some embodiments to a server, via bidirectional communication links. The bidirectional communication links allow each machine to simultaneously receive information from and provide information to its adjacent machine upstream or downstream in the linear communication orbit. The upstream direction, relative to a machine other than the head node, is the direction of communication or message flow toward the head node. For the head node, the upstream direction is toward the server to which the head node is coupled. The downstream direction, relative to a machine other than the head node, is the direction of communication or message flow toward the tail node. In some embodiments, for the tail node, the downstream direction is undefined. In some other embodiments, for the tail node, the downstream direction is toward the server to which the tail node is coupled.
As shown in
In various embodiments of the present invention, specific information communicated along the forward or backward communication channel may originate from a server, a head node, a tail node or an intermediate machine (e.g., machine 114 in
In accordance with some embodiments of the present invention, specific information that may be communicated in the forward and backward communication channels includes, but is not limited to, system management messages, data requests, specific data (e.g., data requested by the data requests) and status messages. In one example, a system management message is an installation or update message issued by the server, and transferred along the forward communication channel to various client nodes on the linear communication orbit. At a respective client node that receives the installation or update message, the message is parsed into a plurality of data requests each of which may be circulated within the linear communication orbit to request respective specific data. The requested respective specific data are returned to the client node, and distributed to other client nodes along the forward or backward communication channel as well. More details will be presented below concerning specific data caching and distribution processes, such as for installations and updates of applications, databases and the like.
Security Risk Assessment and Remediation
Having user interfaces that organize data based on security risk assessments allows an administrator (e.g., a “user” having the authority to identify and mitigate (or remediate) security risks in a network of machines/computers) to better remediate the vulnerabilities that are currently or potentially causing the largest amount of compromise to the systems within a distributed system, sometimes herein called “a collection of machines,” “a network” or “a network of systems.” Traditional vulnerability ranking systems have typically been based either on the number of systems, sometimes herein called machines, with the same vulnerability, or on the severity of the vulnerability (e.g., a Common Vulnerability Scoring System (CVSS) score). However, these two criteria do not paint an accurate enough picture of how the vulnerabilities can affect a network of systems. For example, an administrator may identify a first vulnerability that has a high CVSS score and affects a large number of machines, but if those systems do not contain any sensitive information (e.g., social security numbers; confidential financial or banking information; confidential organizational, personal or technical information; etc.), then the security risk associated with the identified vulnerability may be low or modest. On the other hand, there may be a single system having a second vulnerability with a medium CVSS score, but that system, if compromised, could allow an attacker to gain access to sensitive information, either in that machine or in one or more other machines that can be accessed indirectly (through lateral movement, discussed in more detail below) due to the vulnerability.
At this time, there are no security risk assessment tools that help an administrator to assess or identify such a scenario, and as a result an administrator would likely choose to first correct the vulnerability (e.g., by applying a patch, and/or killing a session) that affects the most systems in the network, instead of correcting the vulnerabilities that could potentially lead to the most devastating compromise.
In some embodiments, the security risk assessments, the generation or presentation of user interfaces, and the performance of remedial actions, are performed by a server system coupled to N machines in a collection of machines via one or more communication networks, where N is an integer greater than 10, 100, or 1000. The server system includes one or more communications interfaces for coupling the server system to the N machines in the collection of machines via the one or more communication networks; one or more processors; and memory storing one or more programs, wherein the one or more programs include instructions for performing the operations described below.
In some embodiments, the operations performed by the server system include obtaining, at least in part from the N machines, system risk information that includes administrative rights information, identifying users and groups of users having administrative rights to respective machines of the N machines, and at least two of the following categories of information: open session information identifying open sessions between respective users and respective machines in the N machines; vulnerability information for vulnerabilities, in a set of predefined vulnerabilities, present at respective machines in the N machines; and missing patch information identifying missing software patches at respective machines in the N machines.
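One possible in-memory shape for this system risk information, keyed by machine, is sketched below; the field names and example identifiers are illustrative assumptions, not the claimed data model.

```python
# Hypothetical record for one managed machine, covering the categories of
# system risk information named above; identifiers are examples only.
system_risk_info = {
    "machine-17": {
        "admin_rights": {"users": ["alice"], "groups": ["Domain Admins"]},
        "open_sessions": [{"user": "bob", "machine": "machine-17"}],
        "vulnerabilities": ["CVE-2021-44228"],  # from the predefined set
        "missing_patches": ["KB5012170"],       # missing software patches
    },
}
```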
In some embodiments, the operations performed by the server system further include identifying, for each respective machine in a first subset of the N machines, logically coupled machines, comprising machines of the N machines logically coupled to the respective machine via lateral movement, wherein lateral movement comprises access to the respective machine via one or more other machines using said administrative rights.
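Identifying logically coupled machines can be viewed as graph reachability. The following sketch is an assumption for illustration, not the claimed implementation: one-hop administrative access between machines is modeled as directed edges, and the set of machines reachable via lateral movement is computed with a breadth-first search.

```python
from collections import deque

def lateral_movement_reach(edges, start):
    """edges: dict mapping a machine to the set of machines an attacker
    on that machine could reach in one hop using administrative rights."""
    seen = {start}
    queue = deque([start])
    while queue:
        machine = queue.popleft()
        for neighbor in edges.get(machine, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen - {start}  # machines logically coupled to `start`
```

For example, with edges {"A": {"B"}, "B": {"C"}}, compromising machine "A" ultimately exposes both "B" and "C", even though "C" is not directly reachable from "A".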
In some embodiments, the operations performed by the server system further include operations specific to generating the risk assessment values, user interfaces, and links to remedial actions described below with reference to
Vulnerability Risk Assessment and Remediation
With the above scenario in mind,
While
While
Each machine/system in the distributed system can be assigned (e.g., by the server system) a system rating that is assigned either automatically or by an administrator. In some embodiments, a machine's rating reflects or is assigned in accordance with the machine's importance within the network. For example, a low system rating may be assigned to a front-desk system that does not contain any sensitive information. A medium rating may be assigned to a system that is used by a user in the accounting department. A high risk rating may be assigned to a central server that contains an administrative rights store (also sometimes called an identity, authentication and authorization store, user accounts credentials store, or active directory information store), banking information, an important project, or social security information. It is worth noting that these categorizations can be subjective, and can be changed based on the needs of the owners or administrators of the distributed system. From another viewpoint, a system rating can be assigned to a respective system based on vulnerability severity of the system, where the vulnerability severity of the system can be determined based on whether the system contains (or has access to) sensitive information and/or provides important services that might be compromised by the system's vulnerabilities.
As briefly discussed earlier, for the vulnerabilities in a set of predefined vulnerabilities, both a direct vulnerability risk score and a derivative vulnerability risk score can be determined. A direct vulnerability risk score is a score based on how many systems are directly affected by the vulnerability (e.g., how many systems have a recognized vulnerability that is not network dependent). A derivative vulnerability risk score is a score that is based on how many systems could be compromised, using the vulnerability associated with the risk score, if an attacker were to compromise a single system and exploit all lateral movement possibilities through the network to access further systems. In one example, as shown in
Direct Vulnerability Risk Score=(Au*1+Al*2+Am*5+Ah*10)*V*P
Derivative Vulnerability Risk Score=((Au*1+Al*2+Am*5+Ah*10)*V*P)+(Lu*1+Ll*2+Lm*5+Lh*10)
The risk factors (sometimes called variables) in these example equations are discussed below. Since the derivative vulnerability risk score equation 212-1 overlaps with the direct vulnerability risk score equation 210-1, the direct vulnerability risk score equation 210-1 will be discussed first. The risk factors included within the brackets of the direct vulnerability risk score equation 210-1 are weighted based on the importance of each system having the vulnerability for which a risk score is being generated. Below is a list of risk factors and information on how these risk factors are determined and what weighting is associated with them:
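The two example equations above (210-1 and 212-1) can be transcribed directly; the sketch below is illustrative only, with the weights 1, 2, 5 and 10 taken from the equations, and the parameter names mirroring the risk factors discussed below.

```python
# Illustrative transcription of example equations 210-1 and 212-1.
# a_u/a_l/a_m/a_h: counts of affected systems by system rating;
# l_u/l_l/l_m/l_h: counts of laterally reachable systems by rating;
# v and p: the multiplicative factors discussed in the text.
def direct_vulnerability_risk_score(a_u, a_l, a_m, a_h, v, p):
    # Weighted count of directly affected systems, scaled by V and P.
    return (a_u * 1 + a_l * 2 + a_m * 5 + a_h * 10) * v * p

def derivative_vulnerability_risk_score(a_u, a_l, a_m, a_h, v, p,
                                        l_u, l_l, l_m, l_h):
    # Direct score plus a weighted count of systems reachable via
    # lateral movement from any affected system.
    return (direct_vulnerability_risk_score(a_u, a_l, a_m, a_h, v, p)
            + (l_u * 1 + l_l * 2 + l_m * 5 + l_h * 10))
```

Note that the derivative score can exceed the direct score even when few systems are directly affected, which is precisely the case the direct score fails to surface.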
In some embodiments, the computer system severity rating or computer system importance rating is based on whether (or the extent to which) the computer system stores or has access to sensitive information or provides important services. Thus, in some embodiments, computer systems that provide certain predefined services, or that store or have access to particular predefined types of sensitive information, are automatically categorized (e.g., through the use of a rule) as having a severity rating or importance rating no lower than a corresponding value. The use of such rules to categorize computer systems with respect to importance or vulnerability severity can substantially reduce the burden on administrators to categorize the machines or computer systems in their networks.
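Such rules might be sketched as follows; the rule contents, attribute names, and rating scale are invented for illustration. Each rule floors a machine's rating at a minimum value when its predicate matches, so a rule can raise, but never lower, the administrator-assigned rating.

```python
RATING_ORDER = {"low": 1, "medium": 2, "high": 3}

# (predicate over a machine's attributes, minimum rating) pairs;
# these example rules are hypothetical, not from the specification.
RULES = [
    (lambda m: "active_directory" in m.get("services", ()), "high"),
    (lambda m: "banking" in m.get("data_types", ()), "high"),
    (lambda m: "accounting" in m.get("departments", ()), "medium"),
]

def effective_rating(machine, assigned="low"):
    # Raise the assigned rating to each matching rule's minimum rating.
    rating = assigned
    for predicate, floor in RULES:
        if predicate(machine) and RATING_ORDER[floor] > RATING_ORDER[rating]:
            rating = floor
    return rating
```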
In some embodiments, the values of vulnerability risk factors described above, for any respective vulnerability, are determined through queries to the endpoint machines capable of having the vulnerability, sent via the linear communication orbits, thereby enabling efficient collection of information identifying the endpoint machines having the respective vulnerability. In some embodiments, the severity rating of the vulnerability at each machine is also obtained using the same queries, and the resulting collected data is used to determine or generate the values of vulnerability risk factors described above.
In addition, in this example, the weighted summation of the risk factors (e.g., Au, Al, Am and Ah) representing counts of systems having the vulnerability for which a risk score is being generated (i.e., what is contained within the brackets of the direct vulnerability risk score equation 210-1) is multiplied by at least one of the following two factors:
After direct vulnerability risk scores 206-1 and 206-2 are determined for multiple vulnerabilities, the vulnerabilities are (or can be, in response to a user command) sorted in the user interface shown in
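The sort itself is straightforward; a sketch (with assumed field names) ordering vulnerabilities by either score, highest risk first:

```python
# Example data chosen so the two sort orders differ, as in the scenario
# the text describes; names and scores are hypothetical.
vulnerabilities = [
    {"name": "vuln-1", "direct_score": 450.0, "derivative_score": 310.0},
    {"name": "vuln-2", "direct_score": 120.0, "derivative_score": 460.0},
]

def sort_for_display(vulns, key="derivative_score"):
    # Highest-risk vulnerabilities are listed first in the user interface.
    return sorted(vulns, key=lambda v: v[key], reverse=True)
```

Here vuln-1 ranks first by direct score, but vuln-2 ranks first by derivative score, illustrating why sorting on derivative risk can surface a vulnerability that the direct sort buries.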
Turning to the Derivative Vulnerability Risk Score equation 212-1, the derivative vulnerability risk scores are calculated by taking the direct vulnerability risk score equation 210-1 and adding to that score a weighted summation of additional items, sometimes called derivative vulnerability risk factors. The values of the additional items are each multiplied by a respective predetermined or assigned weight in order to weight their importance or contribution to the derivative vulnerability risk score for each vulnerability for which a derivative vulnerability risk score is being generated. In this example, the additional risk factors are:
In some embodiments, the plurality of vulnerability risk factors includes one or more lateral movement values (e.g., derivative vulnerability risk factors, examples of which are provided above), each lateral movement value corresponding to a number of logically coupled machines, logically coupled via lateral movement to any of the machines affected by the respective vulnerability, and having a corresponding predefined importance rating or corresponding vulnerability severity rating. Methods and structures for generating lateral movement values, also sometimes herein called derivative risk factors, are discussed below.
In some embodiments, the plurality of vulnerability risk factors includes a combination (e.g., a linear combination, an example of which is provided above) of two or more lateral movement values, each lateral movement value corresponding to a number of logically coupled machines, logically coupled via lateral movement to any of the machines affected by the respective vulnerability, and having a corresponding predefined importance rating or corresponding vulnerability severity rating.
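For illustration only, a direct vulnerability risk score formed as a weighted sum of affected-machine counts, and a derivative score adding a linear combination of lateral movement values, might be sketched as follows; the weights (1/2/5/10, mirroring the analogous example equations later in this document) and bucket names are assumptions:

```python
# Assumed weights per importance/severity bucket; the actual weights are
# predetermined or assigned per embodiment.
WEIGHTS = {"u": 1, "l": 2, "m": 5, "h": 10}

def direct_vulnerability_risk_score(affected_counts):
    """Weighted sum of counts of affected machines (e.g., Au, Al, Am, Ah)."""
    return sum(WEIGHTS[k] * affected_counts[k] for k in WEIGHTS)

def derivative_vulnerability_risk_score(affected_counts, lateral_counts):
    """Direct score plus a weighted sum of lateral movement values
    (e.g., Lu, Ll, Lm, Lh), per the description above."""
    return (direct_vulnerability_risk_score(affected_counts)
            + sum(WEIGHTS[k] * lateral_counts[k] for k in WEIGHTS))

direct = direct_vulnerability_risk_score({"u": 3, "l": 2, "m": 1, "h": 1})
# direct -> 3*1 + 2*2 + 1*5 + 1*10 = 22
```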
Using the vulnerability risk factors, risk scores (i.e., direct and derivative risk scores) are determined for each vulnerability in a set of predefined vulnerabilities. As shown in
As shown in
After the server system (e.g., in response to a command from the administrator, or by default) sorts the vulnerabilities based on the direct vulnerability risk score 210 or the derivative risk score 212, the user interface of
The remedial action 234 region (e.g., column) included in the user interface 201 includes one or more remedial action buttons for each vulnerability (e.g., a remedial action button 236 for vulnerability 1 and a remedial action button 238 for vulnerability 2). While the remedial actions applicable to any respective vulnerability are usually vulnerability specific, the same types of remedial measures are applicable to many vulnerabilities. There are many types of vulnerabilities, and corresponding types of remedial measures, and thus the following list of types of remedial measures is not exhaustive. Examples of types of remedial measures are:
To give the administrator a better understanding of how lateral movement occurs, and where a respective remedial action could be performed, a graph can be displayed in the user interface. The graph can be accessed in numerous ways, but for the sake of this example, the administrator can select any one of the following risk factors (Au, Al, Am, Ah, Lu, Ll, Lm, or Lh) for any one of the vulnerabilities to bring up a corresponding graph. Depending on the risk factor the administrator has selected, a different graph or portion of a graph may be displayed in the user interface, or the manner of presentation of the graph or the information shown in the graph may differ.
Using the user interface of
An additional optional listing or set of remedial actions 315 can be placed near or next to either the “User 1” symbol 316 associated with “User 1” 304, the Group 1 symbol 318 associated with the “Group 1” 306, or the Computer 3 symbol 320 associated with “Computer 3” 310. For illustrative purposes this optional listing or set of remedial actions 315 is placed next to the Group 1 symbol 318, associated with the “Group 1” 306. An example of a remedial action in the optional listing or set of remedial actions 315 is removing a user or a subgroup of users from an administrative group, so as to reduce opportunities for lateral movement between systems. Another example would be to adjust the administrative rights of the users in an administrative group, for example, reducing the administrative rights granted to the users in that administrative group.
Lateral Movement Values Determination
In some embodiments, the lateral movement values (sometimes herein called derivative risk factors), used in determining some of the risk scores discussed in this document, are determined (e.g., by a respective server system, such as remote server 110,
For example, in some embodiments a node data structure, or a set of node data structures, is used to store data representing each known asset in the system. In another example, relationship data structures are used to represent (e.g., store information representing) the relationship of each asset (corresponding to a node in the graph or graphs) to every other asset to which it has a direct or indirect relationship. The assets to which a respective asset has a direct or indirect relationship are sometimes called reachable assets, as they are reachable from the respective assets. For a respective user, the relationship data structures store information denoting each group to which the respective user belongs, each machine to which the respective user has an open session, each machine to which the respective user has access, and the nature of that access.
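A minimal sketch of a node data structure of the kind described above, for a single user asset, follows; the field names and types are assumptions for illustration only:

```python
from dataclasses import dataclass, field

# Hypothetical node data structure for a user asset, recording the groups the
# user belongs to, machines with open sessions, and machine access rights.
@dataclass
class UserNode:
    asset_id: str
    groups: set = field(default_factory=set)          # groups the user belongs to
    open_sessions: set = field(default_factory=set)   # machines with an open session
    access: dict = field(default_factory=dict)        # machine_id -> nature of access

user = UserNode("user1")
user.groups.add("group1")
user.open_sessions.add("machine7")
user.access["machine7"] = "administrative"
```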
In the graph or graphs, there is a path between each pair of assets that have a relationship, and that path may have one or more path segments.
In some embodiments, with respect to each respective asset (e.g., each known user, group of users and machine in the system), each relationship with another asset in the system (e.g., any asset reachable from the respective asset) is represented (e.g., in a respective relationship data structure for each such node) by an identifier of the other node and a relationship descriptor, which includes a set of flags that indicate the nature of the relationship, which in turn is based on the types of path segments connecting the two assets in the graph. For example, the flags can include some or all of the following:
In the process of generating the data structures representing the portion of the graph between a respective asset and all other assets reachable from the respective asset, when adding the information for a “new” asset that is reachable (from the respective asset) via one or more assets for which a relationship descriptor has already been generated, the relationship descriptor for the new asset can be determined, at least in part, by combining the relationship descriptors for the assets in the path between the respective asset and the new asset.
Once the relationship data structures used to represent the relationships between an initial asset or set of assets and every other asset to which the initial asset or set of assets has a direct or indirect relationship have been generated, the lateral movement values needed for any of the derivative risk scores described in this document can be computed using the information in the node data structures and relationship data structures, both to identify the reachable assets and to count the numbers of reachable assets that meet the requirements for each lateral movement value.
It is noted that storing the full path from every asset to every other asset in a large distributed system would typically require considerably more memory than is practical, and would make determining lateral movement values expensive for large scale systems (e.g., systems having tens or hundreds of thousands of machines). For this reason, in some embodiments the relationship descriptors are each represented as a set of bit values, one for each flag in the relationship descriptor. In this way, the relationship descriptor for a reachable asset can be stored in just one or two bytes of storage.
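For illustration only, a relationship descriptor represented as a set of bit flags that fits in a single byte, together with the combination of descriptors along a path, might be sketched as follows; the specific flags shown here are hypothetical:

```python
from enum import IntFlag

# Hypothetical relationship-descriptor flags; the actual flag set is
# implementation specific. Up to eight flags fit in a single byte.
class Rel(IntFlag):
    MEMBER_OF    = 0x01   # path includes a group-membership segment
    ADMIN_ACCESS = 0x02   # path includes an administrative-rights segment
    OPEN_SESSION = 0x04   # path includes an open-session segment
    INDIRECT     = 0x08   # multi-segment (indirect) path

def combine(a: Rel, b: Rel) -> Rel:
    """Descriptor for an asset reached through an intermediate asset:
    the union of the flags along the path, marked as indirect."""
    return a | b | Rel.INDIRECT

d = combine(Rel.MEMBER_OF, Rel.ADMIN_ACCESS)
packed = int(d).to_bytes(1, "little")   # the descriptor fits in one byte
```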
To prevent the graphing computation process from getting caught in lookup loops, a reachable asset is added to the set of known reachable assets for a given respective asset only when either (A) the reachable asset is not already represented in the data structures as a reachable asset, or (B) a shorter path to that reachable asset than the paths already represented in the data structures has been found, in which case the relationship descriptor for the reachable asset is updated based on the shorter (more direct) path.
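A minimal sketch of a traversal implementing the loop-avoidance rule described above (record a reachable asset only when it is new, or when a shorter path to it has been found) might look like the following; the graph representation is an assumption for illustration:

```python
from collections import deque

def reachable_assets(graph, start):
    """graph: dict mapping asset -> iterable of directly related assets.
    Returns a dict mapping each reachable asset to its shortest path length,
    never revisiting an asset unless a shorter path to it is found."""
    best = {}                         # asset -> shortest known path length
    queue = deque([(start, 0)])
    while queue:
        asset, depth = queue.popleft()
        for neighbor in graph.get(asset, ()):
            if neighbor == start:
                continue              # avoid looping back to the start asset
            if neighbor not in best or depth + 1 < best[neighbor]:
                best[neighbor] = depth + 1   # new asset, or a shorter path
                queue.append((neighbor, depth + 1))
    return best

graph = {"user1": ["group1"], "group1": ["machine1", "machine2"],
         "machine1": ["user1"]}     # note the cycle back to user1
reachable = reachable_assets(graph, "user1")
# reachable -> {'group1': 1, 'machine1': 2, 'machine2': 2}
```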
In some embodiments, the determination of reachable assets, and the determination of the most direct path to each of those reachable assets, is performed for every known asset in the system. This can lead to a large amount of duplicate work, as many paths (e.g., portions of paths) are traversed repeatedly in making those determinations. Furthermore, simultaneously storing information representing all reachable assets for all known assets in a large system is not practical. As a result, once all the needed counts of assets have been determined, for determining the lateral movement values needed to generate risk scores for a group of assets or a set of vulnerabilities, the data structures representing the reachable assets are typically discarded to make room for the data structures representing the reachable assets of another group of assets. However, to make this process more efficient, in some embodiments, the data structures representing the reachable assets for the largest groups of users are cached (e.g., stored in a fixed amount of memory and retained), until the lateral movement values and/or risk scores have all been computed, or other cache eviction criteria are satisfied.
During graph traversal, for generating data representing all the reachable assets for a given asset or group of assets, whenever a cached group is encountered, the stored reachable assets are copied to (or used temporarily as) the data structures representing the reachable assets for the given asset or group of assets.
In some embodiments, data structures representing all reachable assets of the groups with the largest numbers of direct members are added to a fixed size cache (e.g., a cache having a configurable size, but that remains at the configured size unless and until an administrator changes the configured size), starting with the largest groups, and progressing to smaller groups, until the cache is full. In some embodiments, the computation of reachable assets, and the corresponding relationship descriptors, for the largest groups in the system is performed first, making that information available during the computation of reachable assets and lateral movement values for the entire system. The reachable assets from a group are the same for all members of the group, so maintaining a cache of the reachable assets for large groups improves the efficiency of determining reachable assets and lateral movement values derived from the data representing the reachable assets.
In some embodiments, many assets' only connection to reachable assets is through a single large group or a small number of large groups. In some embodiments, for each such asset that is a direct member of one or more cached groups, to further reduce the amount of work required to generate counts of various types of reachable assets (which, in turn are used to generate lateral movement values), counts of reachable assets are initialized to the values (e.g., counts of various types of reachable assets) for the largest cached group of which it is a direct member.
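For illustration only, the caching strategy described above (cache the reachable-asset counts of the largest groups, and seed a member asset's counts from the largest cached group it belongs to) might be sketched as follows; all names and data shapes here are assumptions:

```python
def build_group_cache(groups, cache_size):
    """groups: dict mapping group -> (member_count, reachable_asset_counts).
    Cache the reachable-asset counts of the largest groups, up to the
    fixed cache size, starting with the largest groups."""
    largest_first = sorted(groups, key=lambda g: groups[g][0], reverse=True)
    return {g: groups[g][1] for g in largest_first[:cache_size]}

def initial_counts(member_of, cache, group_sizes):
    """Initialize an asset's reachable-asset counts from the largest cached
    group of which it is a direct member (empty counts otherwise)."""
    cached = [g for g in member_of if g in cache]
    if not cached:
        return {}
    biggest = max(cached, key=lambda g: group_sizes[g])
    return dict(cache[biggest])   # copy, so updates do not disturb the cache

groups = {"g1": (1000, {"high": 12}), "g2": (10, {"high": 1}),
          "g3": (500, {"high": 5})}
cache = build_group_cache(groups, 2)        # caches g1 and g3 (the largest)
seed = initial_counts(["g2", "g3"], cache, {g: groups[g][0] for g in groups})
# seed -> {'high': 5}, taken from cached group g3
```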
Group Risk Assessment and Remediation
Allowing an administrator (or other user with the authority to identify and remediate security risks) to have different perspectives (i.e., viewpoints) on how a security risk could affect the network allows the administrator to better remediate the security risks that are currently or potentially causing the largest amount of compromise to the systems within a distributed system. While the user interface of
While the sorted list based on administrative rights information is discussed in a general way, additional details regarding the sorted list will be discussed below and in reference to both
While
While
As discussed above, each system in the network can be assigned, either automatically or by an administrator, a system rating.
As briefly discussed earlier, for the user groups identified in the administrative rights information, both a direct group risk assessment value, and a derivative group risk assessment value can be determined. It is noted that in some implementations, the administrative rights information is obtained from the Active Directory (trademark of Microsoft) service that runs on Windows Server (trademark of Microsoft), but in other implementations is obtained from an identity, authentication and authorization store that enables administrators to manage permissions and access to network resources.
A direct group risk assessment value (also sometimes called a direct user group assessment value) is based on a count of machines to which the group has direct administrative rights (i.e., access). A derivative group risk assessment value (also sometimes called a derivative user group assessment value) is a value (i.e., score) that is based in part on how many systems can be indirectly accessed by users in the group, using lateral movement; this value is useful in that it indicates how many systems could be reached, or the aggregate importance of systems that could be reached, and compromised by an attacker who gains access to the credentials of any user in the group. In one example, as shown in
Direct Group Risk Score=((Du*1+Dl*2+Dm*5+Dh*10)+(lu*1+ll*2+lm*5+lh*10)+S)*M
Derivative Group Risk Score=((Du*1+Dl*2+Dm*5+Dh*10)+(lu*1+ll*2+lm*5+lh*10)+S)*M+(Lu*1+Ll*2+Lm*5+Lh*10)
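The two example equations above can be sketched directly in code, where D and I are dicts of direct and indirect machine counts keyed by importance bucket ('u', 'l', 'm', 'h'), S is the number of open sessions, M is the number of group members, and L holds the lateral movement values; this is a sketch of the example equations only, not of any particular implementation:

```python
W = {"u": 1, "l": 2, "m": 5, "h": 10}   # weights from the example equations

def direct_group_risk_score(D, I, S, M):
    direct = sum(W[k] * D[k] for k in W)     # Du*1 + Dl*2 + Dm*5 + Dh*10
    indirect = sum(W[k] * I[k] for k in W)   # lu*1 + ll*2 + lm*5 + lh*10
    return (direct + indirect + S) * M

def derivative_group_risk_score(D, I, S, M, L):
    lateral = sum(W[k] * L[k] for k in W)    # Lu*1 + Ll*2 + Lm*5 + Lh*10
    return direct_group_risk_score(D, I, S, M) + lateral

zero = {"u": 0, "l": 0, "m": 0, "h": 0}
score = direct_group_risk_score({"u": 1, "l": 0, "m": 1, "h": 0}, zero, 2, 3)
# score -> (6 + 0 + 2) * 3 = 24
```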
The risk factors (sometimes called variables) in these example equations are discussed below. Since the derivative group risk assessment equation 412-1 overlaps with the direct group risk assessment equation 410-1, the direct group risk assessment equation 410-1 will be discussed first. The risk factors included within the first interior brackets (i.e., the leftmost interior brackets) of the direct user group risk assessment equation 410-1 are weighted based on the importance of each system to which the group (for which a group risk assessment value is being generated) has direct administrative rights. Below is a list of direct risk factors and information on how these direct risk factors are determined and what weighting is associated with them:
In addition, in this example, the weighted summation of the direct risk factors (e.g., Du, Dl, Dm, and Dh) representing counts of systems to which the group has direct administrative rights (i.e., what is contained within the leftmost interior brackets of the direct group risk assessment equation 410-1) is added to a weighted summation of indirect risk factors (e.g., lu, ll, lm, and lh) representing counts of systems to which the group has indirect administrative rights (i.e., what is contained within the rightmost interior brackets of the direct group risk assessment equation 410-1). Indirect administrative rights are the administrative rights of other users that a user can exploit through membership in one or more administrative groups.
In addition to the summation of the direct and indirect risk factors, the number of open sessions is also added to the summation of the direct and indirect risk factors. The number of open sessions 430 is the number of sessions that the users of the group currently have running. In some instances, a single user within the group may have one or more sessions running at the same time (e.g., signed into multiple devices). In some embodiments, a weighted summation of the direct risk factors (e.g., Du, Dl, Dm, and Dh), the indirect risk factors (e.g., lu, ll, lm, and lh), and the number of sessions 430, is multiplied by the number of group members 439 that are a part of the respective group for which the group risk assessment value is being generated. The resulting value is the direct group risk assessment value 410.
After direct group risk assessment values 410 (e.g., 406-1 and 406-2) are determined for multiple user groups, the user groups are (or can be, in response to a user command) sorted in the user interface shown in
Turning to the derivative group risk assessment equation 412-1, the derivative group risk assessment values are calculated by taking the direct group risk assessment equation 410-1 and adding to that score a weighted summation of additional items, sometimes called derivative group risk factors, derivative risk factors, or lateral movement values. The determination of such values is discussed in some detail elsewhere in this document. The values of the additional items are each multiplied by a respective predetermined or assigned weight in order to weight their importance or contribution to the derivative group risk assessment value for each group for which the derivative group risk assessment value is being generated. It is noted that the additional items listed here are distinct from the additional items shown in
Using the derivative group risk assessment factors (e.g., direct risk factors, indirect risk factors, number of sessions, number of members in the group, and derivative risk factors), group risk assessment values (e.g., direct and derivative risk scores) are determined for each group in a set of predefined groups. As shown in
As shown in
After the server system (e.g., in response to a command from the administrator, or by default) sorts the groups either based on the direct group risk assessment values 410 or the derivative group risk assessment values 412, the user interface of
The remedial action 440 region (e.g., column) included in the user interface 401 includes one or more remedial action buttons for each group (e.g., a remedial action button 442 for Group 1 and a remedial action button 444 for Group 2). While the remedial actions applicable to any respective group are usually group specific, the same types of remedial measures are applicable to many groups. There are many types of security risks affecting user groups that share administrative rights, and corresponding types of remedial measures, and thus the following list of types of remedial measures is not exhaustive. Examples of types of remedial measures are:
To give the administrator a better understanding of how lateral movement occurs, and where a respective remedial action could be performed, a graph can be displayed in the user interface. The graph can be accessed in numerous ways, but for the sake of this example, the administrator can select any one of the following risk factors (Au, Al, Am, Ah, lu, ll, lm, lh, Lu, Ll, Lm, or Lh) for any one of the user groups to bring up a corresponding graph. Depending on the risk factor the administrator has selected, a different graph or portion of a graph may be displayed in the user interface, or the manner of presentation of the graph or the information shown in the graph may differ.
Using the user interface of
An additional optional listing or set of remedial actions 515 can be placed near or next to either the “Computer 1” symbol 516 associated with “Computer 1” 502, the User 1 symbol 518 associated with the “User 1” 504, or the Computer 3 symbol 520 associated with “Computer 3” 510. For illustrative purposes this optional listing or set of remedial actions 515 is placed next to the User 1 symbol 518, associated with “User 1” 504.
Machine Risk Assessment and Remediation
Allowing an administrator to have different perspectives (i.e., viewpoints) on how a security risk could affect the network allows the administrator to better remediate the security risks that are currently or potentially causing the largest amount of compromise to the systems within a distributed system. While the user interface of
While the sorted list based on system risk information has been discussed in a general way, additional details regarding the sorted list in user interface 601 will be discussed below, with reference to
While
While
As discussed above, each system in the network can be assigned, either automatically or by an administrator, a system rating. These system ratings are then later used as one or more factors in determining the derivative machine risk assessment values.
As briefly discussed earlier, both a direct machine risk assessment value, and a derivative machine risk assessment value can be determined for each machine in at least a subset of the machines in the distributed system. A direct machine risk assessment value for a respective machine is a value (e.g., a score) based on one or more security risks that are present on that respective machine. A derivative machine risk assessment value for a respective machine is a value (e.g., a score) that is in part based on the number of systems that can be used to indirectly access the respective machine, using lateral movement; alternatively, the derivative machine risk assessment value for the respective machine is a value (e.g., a score) that indicates the number of systems that could be compromised, via lateral movement, if the respective machine's security is breached. This value is useful in that it indicates how many systems could be used, or the aggregate importance of systems that could be reached, and compromised by an attacker who gains access to the credentials for the respective machine. In one example, as shown in
Direct Machine Risk Score=D+I+S+P+Vl*1+Vm*2+Vh*5
Derivative Machine Risk Score=(D+I+S+P+Vl*1+Vm*2+Vh*5)+(Lu*1+Ll*2+Lm*5+Lh*10)
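The two example equations above can be sketched directly in code; this is a sketch of the example equations only, not of any particular implementation:

```python
def direct_machine_risk_score(D, I, S, P, Vl, Vm, Vh):
    # D, I, S, P: individual machine risk factors, summed unweighted;
    # Vl, Vm, Vh: counts of low/medium/high-severity vulnerabilities present.
    return D + I + S + P + Vl * 1 + Vm * 2 + Vh * 5

def derivative_machine_risk_score(D, I, S, P, Vl, Vm, Vh, Lu, Ll, Lm, Lh):
    # Adds the weighted lateral movement values to the direct score.
    return (direct_machine_risk_score(D, I, S, P, Vl, Vm, Vh)
            + Lu * 1 + Ll * 2 + Lm * 5 + Lh * 10)

score = direct_machine_risk_score(1, 2, 3, 4, 1, 1, 1)
# score -> 1 + 2 + 3 + 4 + 1 + 2 + 5 = 18
```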
The machine risk factors (sometimes called variables) in these example equations are discussed below. Since the derivative machine risk assessment equation 612-1 overlaps with the direct machine risk assessment equation 610-1, the direct machine risk assessment equation 610-1 will be discussed first. The factors below are summed together to generate the direct machine risk assessment value (sometimes herein called the direct machine risk score). Below is a list of how these direct machine risk factors are determined and what weighting is associated with them:
After direct machine risk assessment values 606-1 and 606-2 are determined for multiple machines, the machines are (or can be, in response to a user command) sorted in the user interface shown in
Turning to the derivative machine risk assessment equation 612-1, the derivative machine risk assessment values (also herein called derivative machine risk scores) are calculated by taking the direct machine risk assessment equation 610-1 and adding to that score a weighted summation of additional items, sometimes called derivative machine risk factors, derivative risk factors, or lateral movement values. The determination of such values is discussed in some detail elsewhere in this document. The values of the additional items are each multiplied by a respective predetermined or assigned weight in order to weight their importance or contribution to the derivative machine risk assessment value for each machine for which a derivative machine risk score is being generated. It is noted that the additional items listed here are distinct from the additional items shown in
Using the machine risk assessment factors (e.g., direct machine risk factors and the derivative machine risk factors), machine risk assessment values (i.e., direct and derivative risk scores) are determined for each machine in a set of machines (e.g., a subset of the machines in the distributed system). As shown in
As shown in
After the server system (e.g., in response to a command from the administrator, or by default) sorts the machines either based on the direct machine risk assessment values 610 or the derivative machine risk assessment values 612, the user interface of
The remedial action 640 region (e.g., column) included in the user interface 601 includes one or more remedial action buttons for each machine (e.g., a remedial action button 636 for Comp. 1 and a remedial action button 638 for Comp. 2) for presenting remedial actions that the administrator may apply to that machine, or that may be used to reduce security risk with respect to that machine. While the remedial actions applicable to any respective machine may be (and typically are) specific to that machine, the same types of remedial measures are applicable to many machines. There are many types of security risks affecting machines (e.g., servers, laptops, wireless devices, and other connected devices, etc.), and corresponding types of remedial measures, and thus the following list of types of remedial measures is not exhaustive. Examples of types of remedial measures are:
In some embodiments, to give the administrator a better understanding of how lateral movement could be used to move between the respective machine and other machines, and where a respective remedial action could be performed, a graph can be displayed in the user interface. The graph can be accessed in numerous ways, but for the sake of this example, the administrator can select any one of the following risk factors (D, I, S, P, Vl, Vm, Vh, Lu, Ll, Lm, and Lh) for any one of the machines to bring up a corresponding graph. Depending on the risk factor the administrator has selected, a different graph or portion of a graph may be displayed in the user interface, or the manner of presentation of the graph or the information shown in the graph may differ.
Using the user interface of
An additional optional listing or set of remedial actions 715 can be placed near or next to either the “Group 1” symbol 716 associated with “Group 1” 706, the User 1 symbol 718 associated with the “User 1” 704, or the “Computer 3” symbol 720 associated with “Computer 3” 710. For illustrative purposes this optional listing or set of remedial actions 715 is placed next to the Group 1 symbol 716, associated with “Group 1” 706. Examples of such remedial actions are removing a user from an administrative group, or a subgroup (e.g., a group of users) from an administrative group, so as to reduce opportunities for lateral movement between systems; and adjusting (e.g., reducing) the administrative rights of a respective user or group of users, with respect to either to the respective machine or a machine logically coupled to the respective machine via lateral movement.
Patch Risk Assessment and Remediation
Allowing an administrator to have another, different perspective (i.e., viewpoint) on how a security risk could affect the network allows a user (e.g., an administrator) to better remediate the security risks that are currently or potentially causing the largest amount of compromise to the systems within a distributed system. While the user interface of
While the sorted list based on patch risk information is discussed in a general way, additional details regarding the sorted list will be discussed below and in reference to
While
As briefly discussed earlier, for the patches identified in the patch risk information, a patch risk assessment value can be determined. In some embodiments, a respective patch risk assessment value is a value based on one or more security risks that are present as a result of the missing patch. In one example, as shown in
Patch Risk Reduction Score=(Mu*1+Ml*2+Mm*5+Mh*10)+(Vl*1+Vm*2+Vh*5)
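The example equation above can be sketched directly in code; this is a sketch of the example equation only, not of any particular implementation:

```python
def patch_risk_reduction_score(Mu, Ml, Mm, Mh, Vl, Vm, Vh):
    # Mu..Mh: counts of machines missing the patch, by importance rating;
    # Vl..Vh: counts of vulnerabilities remediated by the patch, by severity.
    return (Mu * 1 + Ml * 2 + Mm * 5 + Mh * 10) + (Vl * 1 + Vm * 2 + Vh * 5)

score = patch_risk_reduction_score(1, 1, 1, 1, 1, 1, 1)
# score -> (1 + 2 + 5 + 10) + (1 + 2 + 5) = 26
```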
The patch risk factors (sometimes called variables) in this example equation are discussed below. The factors below are contained within the leftmost set of brackets and are summed together to generate a component of the patch risk assessment value. Below is a list of how these patch risk factors are determined and what weighting is associated with them:
In addition to the patch risk factors presented above, the additional patch risk factors discussed below are contained within the rightmost set of brackets and are summed together to generate a component of the patch risk assessment value. This summation of additional patch risk factors in the rightmost set of brackets is then added to the summation of patch risk factors in the leftmost set of brackets. Below is a list of how these additional patch risk factors are determined and what weighting is associated with them:
In some embodiments, the patch risk factors used to determine the patch risk assessment values 806 include one or more machine counts, each machine count comprising a number of machines of the N machines missing the respective patch and having a corresponding predefined importance rating; and one or more vulnerability factors, each vulnerability factor corresponding to a count of vulnerabilities, in the set of predefined vulnerabilities, having a corresponding vulnerability rating, and remediated by the respective patch. In some embodiments, the one or more machine counts include a first machine count comprising a number of machines of the N machines missing the respective patch and having a first predefined machine importance rating, and a second machine count comprising a number of machines of the N machines missing the respective patch and having a second predefined machine importance rating different from the first predefined machine importance rating. In some embodiments, the one or more vulnerability factors include a first vulnerability factor corresponding to a first count of vulnerabilities, in the set of predefined vulnerabilities, mitigated by the respective patch and having vulnerability scores in a first score value range, and a second vulnerability factor corresponding to a second count of vulnerabilities, in the set of predefined vulnerabilities, fixed by the respective patch and having vulnerability scores in a second score value range distinct from the first score value range.
After patch risk assessment values 806-1 and 806-2 are determined for multiple patches, the patches are (or can be, in response to a user command) sorted in the patch risk assessment user interface, e.g., shown in
As shown in
After the server system (e.g., in response to a command from the administrator, or by default) sorts the patches based on the patch risk assessment values 810, the patch risk assessment user interface, e.g., of
The remedial action 826 region (e.g., column) included in the patch risk assessment user interface 801 includes one or more remedial action buttons for each patch (e.g., a remedial action button 828 for Patch 1 and a remedial action button 830 for Patch 2). While the remedial actions applicable to any respective patch are usually patch specific, the same types of remedial measures are applicable to many patches, and are typically the same types of remedial measures applicable to many vulnerabilities. Additional information associated with the respective patch (option 2, above) is optionally shown in another user interface, such as user interface 832, in response to a user command entered while user interface 801 is displayed. Within this optional user interface 832, a patch report graph 834 is shown that contains, in part, patch information (e.g., listing vulnerabilities corrected by the patch). The user interface 832 also contains one or more user-selectable buttons (“Apply Patch 1” 836 and “Apply Patch 2” 838) for applying corresponding patches to the one or more systems missing those patches.
User Risk Assessment and Remediation
As noted above, allowing an administrator to have different perspectives (i.e., viewpoints) on how a security risk could affect the network allows a user (e.g., an administrator) to better remediate the security risks that are currently or potentially causing the largest amount of compromise to the systems within a distributed system. While the user interface of
While
While the sorted list based on user administrative access information has been discussed in a general way, additional details regarding the sorted list of users will be discussed below and in reference to
While
As discussed above, some or all of the machines/systems in the distributed system can be assigned an importance rating either automatically or by an administrator. In some embodiments, a machine's rating reflects or is assigned in accordance with the machine's importance within the distributed system. Details about how machines are classified are discussed above. It is worth noting that these categorizations can be subjective, and can be changed based on the needs of the owners or administrators of the distributed system.
As briefly discussed earlier, for the users identified in the administrative rights information, both a direct user risk assessment value and a derivative user risk assessment value can be determined. A direct user risk assessment value is based on a count of machines to which the user has direct administrative rights (i.e., access), and is optionally also based on the importance ratings of those machines. A derivative user risk assessment value is a value (i.e., score) that is based in part on how many systems can be indirectly accessed by the user, using lateral movement, and is optionally also based on the importance ratings of those machines. The derivative user risk assessment value is useful in that it indicates how many systems could be reached, or the aggregate importance of systems that could be reached, and compromised by an attacker who gains access to the credentials of a specific user. In one example, as shown in
Direct User Risk Rating=((Du*1+Dl*2+Dm*5+Dh*10)+(Iu*1+Il*2+Im*5+Ih*10))*S
Derivative User Risk Rating=((Du*1+Dl*2+Dm*5+Dh*10)+(Iu*1+Il*2+Im*5+Ih*10))*S+(Lu*1+Ll*2+Lm*5+Lh*10)
The risk factors (sometimes called variables) in these example equations are discussed below. Since the derivative user risk assessment equation 912-1 overlaps with the direct user risk assessment equation 910-1, the direct user risk assessment equation 910-1 will be discussed first. The risk factors included within the first interior brackets (i.e., the leftmost interior brackets) of the direct user risk assessment equation 910-1 are weighted based on the importance of each system to which the user (for which a user risk assessment value is being generated) has direct administrative rights. Below is a list of direct risk factors and information regarding how these direct risk factors are determined and what weighting is associated with them:
In addition, in this example, the weighted summation of the direct risk factors (e.g., Du, Dl, Dm and Dh) representing counts of systems to which the user has direct administrative rights (i.e., what is contained within the leftmost interior brackets of the direct user risk assessment equation 910-1) is added to a weighted summation of indirect risk factors (e.g., Iu, Il, Im and Ih) representing counts of systems to which the user has indirect administrative rights (i.e., what is contained within the rightmost interior brackets of the direct user risk assessment equation 910-1). Indirect administrative rights are the administrative rights of other users that the user can exploit through membership in one or more administrative groups.
In some embodiments, in addition to the summation of the direct and indirect risk factors, generating the user risk assessment value includes multiplying that summation by the number (S) of open sessions of the respective user. In some embodiments, the number of open sessions 930 is the number of sessions that the user currently has running. In some instances, a single user may have more than one session running at the same time (e.g., when signed into multiple devices). The resulting value is the direct user risk assessment value 910.
After direct user risk assessment values 910 (e.g., 906-1 and 906-2) are determined for multiple users, the users are (or can be, in response to a user command) sorted in the user interface shown in
Turning to the derivative user risk assessment equation 912-1, the derivative user risk assessment values are calculated by taking the direct user risk assessment equation 910-1 and adding to that score a weighted summation of additional items, sometimes called derivative risk factors or lateral movement values. The determination of such values is discussed in some detail elsewhere in this document. The values of the additional items are each multiplied by a respective predetermined or assigned weight in order to weight their importance or contribution to the derivative user risk assessment value for each user for which the derivative user risk assessment value is being generated. It is noted that the additional items listed here are distinct from the additional items shown in
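As a concrete illustration, the two example rating equations can be evaluated directly from the weighted counts they describe. The following sketch is illustrative only: the machine counts, session count, and function names are hypothetical and are not part of the disclosed system; the weights (1, 2, 5, 10) are those shown in the example equations 910-1 and 912-1.

```python
# Illustrative sketch of the example risk-rating equations 910-1 and 912-1.
# All counts below are hypothetical; weights follow the equations above:
# machines of unclassified/low/medium/high importance are weighted 1/2/5/10.

WEIGHTS = (1, 2, 5, 10)  # unclassified, low, medium, high importance

def weighted_count(counts):
    """Weighted sum of machine counts bucketed by importance rating."""
    return sum(c * w for c, w in zip(counts, WEIGHTS))

def direct_user_risk(direct, indirect, sessions):
    """Direct User Risk Rating = ((D terms) + (I terms)) * S."""
    return (weighted_count(direct) + weighted_count(indirect)) * sessions

def derivative_user_risk(direct, indirect, sessions, lateral):
    """Derivative rating adds the weighted lateral-movement (L) terms."""
    return direct_user_risk(direct, indirect, sessions) + weighted_count(lateral)

# Hypothetical user: direct admin rights on (3, 1, 0, 1) machines by
# importance bucket, indirect rights on (2, 0, 1, 0), two open sessions,
# and lateral-movement reach to (5, 2, 0, 1) additional machines.
d = direct_user_risk((3, 1, 0, 1), (2, 0, 1, 0), 2)             # (15 + 7) * 2 = 44
dv = derivative_user_risk((3, 1, 0, 1), (2, 0, 1, 0), 2, (5, 2, 0, 1))  # 44 + 19 = 63
```

Note how the lateral-movement terms are added after the session multiplier, so a user with modest direct rights can still receive a high derivative rating.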
Using the derivative user risk assessment factors (e.g., direct risk factors, indirect risk factors, number of open sessions, and derivative risk factors), user risk assessment values (i.e., direct and derivative risk scores) are determined for each user in a set of users (e.g., which may be a subset of all the users having administrative rights to at least one machine in the distributed system). As shown in
After the server system (e.g., in response to a command from the administrator, or by default) sorts the users either based on the direct user risk assessment values 910 or the derivative user risk assessment values 912, the user interface of
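The sorting behavior described above can be sketched as follows. The user records, field names, and scores here are hypothetical illustrations of sorting by either the direct or derivative user risk assessment value.

```python
# Hypothetical user records with precomputed risk assessment values.
users = [
    {"name": "User 1", "direct": 44, "derivative": 63},
    {"name": "User 2", "direct": 80, "derivative": 85},
    {"name": "User 3", "direct": 12, "derivative": 90},
]

def sort_users(users, key="derivative"):
    """Sort users for display, highest-risk first, by the chosen score."""
    return sorted(users, key=lambda u: u[key], reverse=True)

# Sorting by derivative score surfaces User 3, whose direct score is low
# but whose lateral-movement reach makes them the riskiest overall.
by_derivative = sort_users(users)            # User 3, User 2, User 1
by_direct = sort_users(users, key="direct")  # User 2, User 1, User 3
```

This illustrates why offering both orderings matters: the direct ordering alone would place the user with the greatest lateral-movement exposure last.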
The remedial action 940 region (e.g., column) included in the user interface 901 includes one or more remedial action buttons for each user (e.g., a remedial action button 942 for User 1 and a remedial action button 944 for User 2). While the remedial actions applicable to any respective user are usually user specific, the same types of remedial measures are applicable to many users. There are many types of security risks affecting users, and corresponding types of remedial measures, and thus the following list of types of remedial measures is not exhaustive. Examples of types of remedial measures are:
To give the administrator a better understanding of how lateral movement could be used to increase the security risks associated with a particular user, and where a respective remedial action could be performed, a graph can be displayed in the user interface, for example as shown in
In the user interface 1001, an additional optional listing or set of remedial actions 1015 can be placed near or next to either the “Computer 1” symbol 1016 associated with “Computer 1” 1002, the Group 1 symbol 1018 associated with “Group 1” 1006, or the Computer 3 symbol 1020 associated with “Computer 3” 1010. For illustrative purposes, an optional listing or set of remedial actions 1015 is placed next to the Group 1 symbol 1018, associated with “Group 1” 1006.
Examples of Computational Machines and Server System
In some embodiments, input/output interface 1106 includes a display and input devices such as a keyboard, a mouse, the touch-sensitive surface of a touch-screen display, and/or a track-pad. In some embodiments, communication buses 1110 include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. In some embodiments, memory 1104 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM or other random-access solid-state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, memory 1104 includes one or more storage devices remotely located from the one or more processors 1102. In some embodiments, memory 1104, or alternatively the non-volatile memory device(s) within memory 1104, includes a non-transitory computer readable storage medium.
In some embodiments, memory 1104 or alternatively the non-transitory computer readable storage medium of memory 1104 stores the following programs, modules and data structures, instructions, or a subset thereof:
In some embodiments, message and command module 1120, optionally working in conjunction with patch module 1130, obtains information needed by a respective server system, such as server 110,
In some embodiments, input/output interface 1206 includes a display and input devices such as a keyboard, a mouse, the touch-sensitive surface of a touch-screen display, and/or a track-pad. In some embodiments, communication buses 1210 include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. In some embodiments, memory 1204 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM or other random-access solid-state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, memory 1204 includes one or more storage devices remotely located from the one or more processors 1202. In some embodiments, memory 1204, or alternatively the non-volatile memory device(s) within memory 1204, includes a non-transitory computer readable storage medium.
In some embodiments, memory 1204 or alternatively the non-transitory computer readable storage medium of memory 1204 stores the following programs, modules and data structures, instructions, or a subset thereof:
In some embodiments, input/output interface 1306 includes a display and input devices such as a keyboard, a mouse, the touch-sensitive surface of a touch-screen display, or a track-pad. In some embodiments, communication buses 1310 include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. In some embodiments, memory 1304 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM or other random-access solid-state memory devices; and optionally includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. In some embodiments, memory 1304 includes one or more storage devices remotely located from the one or more processors 1302. In some embodiments, memory 1304, or alternatively the non-volatile memory device(s) within memory 1304, comprises a non-transitory computer readable storage medium.
In some embodiments, memory 1304 or alternatively the non-transitory computer readable storage medium of memory 1304 stores the following programs, modules and data structures, instructions, or a subset thereof:
The foregoing description, for purposes of explanation, has been provided with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated.
This application is a continuation of U.S. application Ser. No. 16/952,009, filed Nov. 18, 2020, which claims priority to U.S. Provisional Patent Application No. 62/937,125, filed Nov. 18, 2019, which is hereby incorporated by reference in its entirety. This application is also related to U.S. patent application Ser. No. 13/797,946, filed Mar. 12, 2013, now U.S. Pat. No. 9,246,977; U.S. patent application Ser. No. 14/554,711, filed Nov. 26, 2014, now U.S. Pat. No. 9,667,738; and U.S. patent application Ser. No. 14/554,739, filed Nov. 26, 2014, now U.S. Pat. No. 9,769,275, all of which are hereby incorporated by reference in their entireties.
| Number | Name | Date | Kind |
|---|---|---|---|
| 5220596 | Patel | Jun 1993 | A |
| 5842202 | Kon | Nov 1998 | A |
| 5949755 | Upadhya et al. | Sep 1999 | A |
| 6049828 | Dev et al. | Apr 2000 | A |
| 6226493 | Leopold et al. | May 2001 | B1 |
| 6615213 | Johnson | Sep 2003 | B1 |
| 6879979 | Hindawi et al. | Apr 2005 | B2 |
| 6885644 | Knop et al. | Apr 2005 | B1 |
| 6959000 | Lee | Oct 2005 | B1 |
| 7043550 | Knop et al. | May 2006 | B2 |
| 7096503 | Magdych et al. | Aug 2006 | B1 |
| 7120693 | Chang et al. | Oct 2006 | B2 |
| 7225243 | Wilson | May 2007 | B1 |
| 7240044 | Chaudhuri et al. | Jul 2007 | B2 |
| 7299047 | Dolan et al. | Nov 2007 | B2 |
| 7483430 | Yuan et al. | Jan 2009 | B1 |
| 7555545 | McCasland | Jun 2009 | B2 |
| 7600018 | Maekawa et al. | Oct 2009 | B2 |
| 7698453 | Samuels et al. | Apr 2010 | B2 |
| 7720641 | Alagappan et al. | May 2010 | B2 |
| 7761557 | Fellenstein et al. | Jul 2010 | B2 |
| 7769848 | Choy et al. | Aug 2010 | B2 |
| 7844687 | Gelvin et al. | Nov 2010 | B1 |
| 8078668 | Moreau | Dec 2011 | B2 |
| 8086729 | Hindawi et al. | Dec 2011 | B1 |
| 8139508 | Roskind | Mar 2012 | B1 |
| 8185612 | Arolovitch et al. | May 2012 | B1 |
| 8185615 | McDysan et al. | May 2012 | B1 |
| 8271522 | Mehul et al. | Sep 2012 | B2 |
| 8392530 | Manapragada et al. | Mar 2013 | B1 |
| 8477660 | Lee et al. | Jul 2013 | B2 |
| 8504879 | Poletto et al. | Aug 2013 | B2 |
| 8510562 | Ramakrishnan et al. | Aug 2013 | B2 |
| 8650160 | Beatty et al. | Feb 2014 | B1 |
| 8677448 | Kauffman | Mar 2014 | B1 |
| 8813228 | Magee et al. | Aug 2014 | B2 |
| 8819769 | Van et al. | Aug 2014 | B1 |
| 8885521 | Wang et al. | Nov 2014 | B2 |
| 8903973 | Hindawi et al. | Dec 2014 | B1 |
| 8904039 | Hindawi et al. | Dec 2014 | B1 |
| 8972566 | Hindawi et al. | Mar 2015 | B1 |
| 9009827 | Albertson et al. | Apr 2015 | B1 |
| 9059961 | Hindawi et al. | Jun 2015 | B2 |
| 9104794 | Zakonov et al. | Aug 2015 | B2 |
| 9246977 | Hindawi et al. | Jan 2016 | B2 |
| 9576131 | Tuvell et al. | Feb 2017 | B2 |
| 9609007 | Rivlin et al. | Mar 2017 | B1 |
| 9667738 | Hindawi et al. | May 2017 | B2 |
| 9716649 | Bent et al. | Jul 2017 | B2 |
| 9729429 | Hindawi et al. | Aug 2017 | B2 |
| 9769037 | Hindawi et al. | Sep 2017 | B2 |
| 9769275 | Hindawi et al. | Sep 2017 | B2 |
| 9800603 | Sidagni | Oct 2017 | B1 |
| 9910752 | Lippincott et al. | Mar 2018 | B2 |
| 9973525 | Roturier | May 2018 | B1 |
| 9985982 | Bartos et al. | May 2018 | B1 |
| 9998955 | MacCarthaigh | Jun 2018 | B1 |
| 10015185 | Kolman | Jul 2018 | B1 |
| 10095864 | Hunt et al. | Oct 2018 | B2 |
| 10111208 | Hindawi et al. | Oct 2018 | B2 |
| 10136415 | Hindawi et al. | Nov 2018 | B2 |
| 10148536 | Hindawi et al. | Dec 2018 | B2 |
| 10261770 | Devagupthapu et al. | Apr 2019 | B2 |
| 10372904 | Hunt et al. | Aug 2019 | B2 |
| 10412188 | Hindawi et al. | Sep 2019 | B2 |
| 10482242 | Hunt et al. | Nov 2019 | B2 |
| 10484429 | Fawcett et al. | Nov 2019 | B1 |
| 10498744 | Hunt et al. | Dec 2019 | B2 |
| 10649870 | Lippincott et al. | May 2020 | B1 |
| 10674486 | Hindawi et al. | Jun 2020 | B2 |
| 10708116 | Hindawi et al. | Jul 2020 | B2 |
| 10795906 | Teubner | Oct 2020 | B1 |
| 10824729 | Hoscheit et al. | Nov 2020 | B2 |
| 10841365 | White et al. | Nov 2020 | B2 |
| 10873645 | Freilich et al. | Dec 2020 | B2 |
| 10929345 | Stoddard et al. | Feb 2021 | B2 |
| 11032298 | Robbins et al. | Jun 2021 | B1 |
| 11100199 | Subramaniam | Aug 2021 | B2 |
| 11151246 | Davis | Oct 2021 | B2 |
| 11153383 | Richards et al. | Oct 2021 | B2 |
| 11172470 | Guieu et al. | Nov 2021 | B1 |
| 11258654 | Hindawi et al. | Feb 2022 | B1 |
| 11277489 | Freilich et al. | Mar 2022 | B2 |
| 11301568 | Dargude | Apr 2022 | B1 |
| 11343355 | Goela et al. | May 2022 | B1 |
| 11372938 | Stoddard et al. | Jun 2022 | B1 |
| 11461208 | Lippincott et al. | Oct 2022 | B1 |
| 11563764 | Hoscheit et al. | Jan 2023 | B1 |
| 11609835 | Varga et al. | Mar 2023 | B1 |
| 11700303 | Richards et al. | Jul 2023 | B1 |
| 11711810 | Guieu et al. | Jul 2023 | B1 |
| 11777981 | Hoscheit et al. | Oct 2023 | B1 |
| 11809294 | Lippincott et al. | Nov 2023 | B1 |
| 11831670 | Molls | Nov 2023 | B1 |
| 11886229 | Goela et al. | Jan 2024 | B1 |
| 11914495 | Varga et al. | Feb 2024 | B1 |
| 11956335 | Goela et al. | Apr 2024 | B1 |
| 20010056461 | Kampe et al. | Dec 2001 | A1 |
| 20020007404 | Vange et al. | Jan 2002 | A1 |
| 20020042693 | Kampe et al. | Apr 2002 | A1 |
| 20020073086 | Thompson et al. | Jun 2002 | A1 |
| 20020099952 | Lambert et al. | Jul 2002 | A1 |
| 20020198867 | Lohman et al. | Dec 2002 | A1 |
| 20030101253 | Saito et al. | May 2003 | A1 |
| 20030120603 | Kojima et al. | Jun 2003 | A1 |
| 20030131044 | Nagendra et al. | Jul 2003 | A1 |
| 20030212676 | Bruce et al. | Nov 2003 | A1 |
| 20030212821 | Gillies et al. | Nov 2003 | A1 |
| 20040037374 | Gonikberg | Feb 2004 | A1 |
| 20040044727 | Abdelaziz et al. | Mar 2004 | A1 |
| 20040044790 | Loach et al. | Mar 2004 | A1 |
| 20040054723 | Dayal et al. | Mar 2004 | A1 |
| 20040054889 | Pitsos | Mar 2004 | A1 |
| 20040064522 | Zhang et al. | Apr 2004 | A1 |
| 20040076164 | Vanderveen et al. | Apr 2004 | A1 |
| 20040190085 | Silverbrook et al. | Sep 2004 | A1 |
| 20050004907 | Bruno et al. | Jan 2005 | A1 |
| 20050053000 | Oliver et al. | Mar 2005 | A1 |
| 20050108356 | Rosu et al. | May 2005 | A1 |
| 20050108389 | Kempin et al. | May 2005 | A1 |
| 20050195755 | Senta et al. | Sep 2005 | A1 |
| 20060039371 | Castro et al. | Feb 2006 | A1 |
| 20060128406 | Macartney | Jun 2006 | A1 |
| 20060282505 | Hasha et al. | Dec 2006 | A1 |
| 20070005738 | Alexion-Tiernan et al. | Jan 2007 | A1 |
| 20070171844 | Loyd et al. | Jul 2007 | A1 |
| 20070211651 | Ahmed et al. | Sep 2007 | A1 |
| 20070230482 | Shim et al. | Oct 2007 | A1 |
| 20070261051 | Porter et al. | Nov 2007 | A1 |
| 20080082628 | Rowstron et al. | Apr 2008 | A1 |
| 20080133582 | Andersch et al. | Jun 2008 | A1 |
| 20080258880 | Smith et al. | Oct 2008 | A1 |
| 20080263031 | George et al. | Oct 2008 | A1 |
| 20080288646 | Hasha et al. | Nov 2008 | A1 |
| 20090125639 | Dam et al. | May 2009 | A1 |
| 20090271360 | Bestgen et al. | Oct 2009 | A1 |
| 20090285204 | Gallant et al. | Nov 2009 | A1 |
| 20090319503 | Mehul et al. | Dec 2009 | A1 |
| 20090328115 | Malik | Dec 2009 | A1 |
| 20100011060 | Hilterbrand et al. | Jan 2010 | A1 |
| 20100070570 | Lepeska | Mar 2010 | A1 |
| 20100085948 | Yu et al. | Apr 2010 | A1 |
| 20100094862 | Bent et al. | Apr 2010 | A1 |
| 20100154026 | Chatterjee et al. | Jun 2010 | A1 |
| 20100296416 | Lee et al. | Nov 2010 | A1 |
| 20100306252 | Jarvis et al. | Dec 2010 | A1 |
| 20110099562 | Nandy et al. | Apr 2011 | A1 |
| 20110231431 | Kamiwada et al. | Sep 2011 | A1 |
| 20110271319 | Venable, Sr. | Nov 2011 | A1 |
| 20110299455 | Ordentlich et al. | Dec 2011 | A1 |
| 20120053957 | Atkins et al. | Mar 2012 | A1 |
| 20120110183 | Miranda et al. | May 2012 | A1 |
| 20120221692 | Steiner et al. | Aug 2012 | A1 |
| 20120269096 | Roskind | Oct 2012 | A1 |
| 20120330700 | Garg et al. | Dec 2012 | A1 |
| 20130110931 | Kim et al. | May 2013 | A1 |
| 20130170336 | Chen et al. | Jul 2013 | A1 |
| 20130212296 | Goel et al. | Aug 2013 | A1 |
| 20130276053 | Hugard et al. | Oct 2013 | A1 |
| 20130326494 | Nunez | Dec 2013 | A1 |
| 20140075505 | Subramanian | Mar 2014 | A1 |
| 20140101133 | Carston et al. | Apr 2014 | A1 |
| 20140149557 | Lohmar et al. | May 2014 | A1 |
| 20140164290 | Salter | Jun 2014 | A1 |
| 20140164552 | Kim et al. | Jun 2014 | A1 |
| 20140181247 | Hindawi et al. | Jun 2014 | A1 |
| 20140181295 | Hindawi et al. | Jun 2014 | A1 |
| 20140244727 | Kang et al. | Aug 2014 | A1 |
| 20140279044 | Summers et al. | Sep 2014 | A1 |
| 20140280280 | Singh | Sep 2014 | A1 |
| 20140282586 | Shear et al. | Sep 2014 | A1 |
| 20140372533 | Fu et al. | Dec 2014 | A1 |
| 20140375528 | Ling | Dec 2014 | A1 |
| 20150080039 | Ling et al. | Mar 2015 | A1 |
| 20150149624 | Hindawi et al. | May 2015 | A1 |
| 20150163121 | Mahaffey et al. | Jun 2015 | A1 |
| 20150172228 | Zalepa et al. | Jun 2015 | A1 |
| 20150199511 | Faile, Jr. | Jul 2015 | A1 |
| 20150199629 | Faile, Jr. | Jul 2015 | A1 |
| 20150256575 | Scott | Sep 2015 | A1 |
| 20150302458 | Dides et al. | Oct 2015 | A1 |
| 20150312335 | Ying et al. | Oct 2015 | A1 |
| 20150372911 | Yabusaki et al. | Dec 2015 | A1 |
| 20150373043 | Wang et al. | Dec 2015 | A1 |
| 20150378743 | Zellermayer et al. | Dec 2015 | A1 |
| 20160034692 | Singler | Feb 2016 | A1 |
| 20160080408 | Coleman et al. | Mar 2016 | A1 |
| 20160119251 | Solis et al. | Apr 2016 | A1 |
| 20160255142 | Hunt et al. | Sep 2016 | A1 |
| 20160255143 | Hunt et al. | Sep 2016 | A1 |
| 20160269434 | Divalentin et al. | Sep 2016 | A1 |
| 20160286540 | Hindawi et al. | Sep 2016 | A1 |
| 20160352588 | Subbarayan et al. | Dec 2016 | A1 |
| 20160360006 | Hopkins et al. | Dec 2016 | A1 |
| 20160378450 | Fu et al. | Dec 2016 | A1 |
| 20170093915 | Ellis et al. | Mar 2017 | A1 |
| 20170118074 | Feinstein et al. | Apr 2017 | A1 |
| 20170133843 | McNeill-McCallum et al. | May 2017 | A1 |
| 20170257432 | Fu et al. | Sep 2017 | A1 |
| 20170286690 | Chari et al. | Oct 2017 | A1 |
| 20170346824 | Mahabir et al. | Nov 2017 | A1 |
| 20180013768 | Hunt et al. | Jan 2018 | A1 |
| 20180039486 | Kulkarni et al. | Feb 2018 | A1 |
| 20180074483 | Cruz | Mar 2018 | A1 |
| 20180074796 | Alabes et al. | Mar 2018 | A1 |
| 20180191747 | Nachenberg et al. | Jul 2018 | A1 |
| 20180191766 | Holeman et al. | Jul 2018 | A1 |
| 20180267794 | Atchison et al. | Sep 2018 | A1 |
| 20180351792 | Hunter et al. | Dec 2018 | A1 |
| 20180351793 | Hunter et al. | Dec 2018 | A1 |
| 20180375892 | Ganor | Dec 2018 | A1 |
| 20190081981 | Bansal | Mar 2019 | A1 |
| 20190096217 | Pourmohammad | Mar 2019 | A1 |
| 20190138512 | Pourmohammad | May 2019 | A1 |
| 20190260638 | Yocam et al. | Aug 2019 | A1 |
| 20190280867 | Kurian | Sep 2019 | A1 |
| 20190319987 | Levy et al. | Oct 2019 | A1 |
| 20190361843 | Stoddard et al. | Nov 2019 | A1 |
| 20200028890 | White et al. | Jan 2020 | A1 |
| 20200053072 | Glozman et al. | Feb 2020 | A1 |
| 20200195693 | Price et al. | Jun 2020 | A1 |
| 20200198867 | Nakamichi | Jun 2020 | A1 |
| 20200202007 | Nagaraja | Jun 2020 | A1 |
| 20200304536 | Mahabir et al. | Sep 2020 | A1 |
| 20210027401 | Hovhannisyan et al. | Jan 2021 | A1 |
| 20210218711 | Biran et al. | Jul 2021 | A1 |
| 20230036694 | Coughlan | Feb 2023 | A1 |
| 20230360040 | Childe et al. | Nov 2023 | A1 |
| Number | Date | Country |
|---|---|---|
| 1553747 | Jul 2005 | EP |
| 2493118 | Aug 2012 | EP |
| Entry |
|---|
| Notice of Allowance, U.S. Appl. No. 15/930,342, Mar. 24, 2022, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 15/930,342, May 25, 2022, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 16/033,131, Jun. 30, 2020, 27 pages. |
| Notice of Allowance, U.S. Appl. No. 16/194,240, Aug. 14, 2019, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 16/194,240, Mar. 2, 2020, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 16/194,240, Nov. 7, 2019, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 16/430,336, Aug. 7, 2020, 28 pages. |
| Notice of Allowance, U.S. Appl. No. 16/430,336, Sep. 3, 2020, 5 pages. |
| Notice of Allowance, U.S. Appl. No. 16/443,720, Feb. 9, 2021, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 16/443,720, Jun. 15, 2021, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 16/532,391, Oct. 21, 2020, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 16/565,247, Aug. 17, 2020, 19 pages. |
| Notice of Allowance, U.S. Appl. No. 16/854,844, Jul. 6, 2021, 16 pages. |
| Notice of Allowance, U.S. Appl. No. 16/870,742, Mar. 7, 2022, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 16/917,800, Oct. 15, 2021, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 16/943,291, Jan. 27, 2022, 12 pages. |
| Notice of Allowance, U.S. Appl. No. 16/943,291, Oct. 1, 2021, 11 pages. |
| Notice of Allowance, U.S. Appl. No. 16/943,307, Nov. 8, 2022, 11 pages. |
| Notice of Allowance, U.S. Appl. No. 16/943,307, Oct. 6, 2022, 12 pages. |
| Notice of Allowance, U.S. Appl. No. 16/952,009, Jul. 25, 2023, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 16/952,009, Mar. 28, 2023, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 17/001,586, Sep. 8, 2022, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 17/129,638, Nov. 4, 2021, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 17/182,083, Sep. 20, 2023, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 17/503,023, Feb. 24, 2023, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 17/521,686, Mar. 3, 2023, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 17/751,504, Nov. 21, 2023, 12 pages. |
| Notice of Allowance, U.S. Appl. No. 17/959,177, Jun. 21, 2023, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 17/959,177, Mar. 28, 2023, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 18/099,854, May 26, 2023, 12 pages. |
| Notice of Allowance, U.S. Appl. No. 18/123,930, Nov. 7, 2023, 11 pages. |
| Notice of Allowance, U.S. Appl. No. 18/204,351, Jun. 21, 2024, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 18/225,620, Jul. 18, 2024, 7 pages. |
| Peter Kairouz et al., “Advances and Open Problems in Federated Learning,” 2021, 121 pages, arXIV:1912.04977v3. |
| Ping Wang et al., “Peer-to-Peer Botnets: The Next Generation of Botnet Attacks”, Jan. 2010, pp. 1-25 (Year: 2010). |
| Requirement for Restriction/Election, U.S. Appl. No. 12/412,623, Nov. 22, 2010, 5 pages. |
| Requirement for Restriction/Election, U.S. Appl. No. 13/107,625, Oct. 11, 2013, 6 pages. |
| Sean Rhea et al., “Handling Churn in a DHT”, 2004, pp. 1-14 (Year: 2004). |
| Stoica et al., “Chord: A Scalable Peer-to-Peer Lookup Service for Internet Applications”, 2001, pp. 1-12 (Year: 2002). |
| Supplemental Notice of Allowability, U.S. Appl. No. 16/443,720, Aug. 4, 2021, 2 pages. |
| Supplemental Notice of Allowability, U.S. Appl. No. 16/870,742, Apr. 11, 2022, 2 pages. |
| Trevor Hastie et al., “The Elements of Statistical Learning, Data Mining, Inference, and Prediction,” 2001, 545 pages, Springer. |
| Trevor Hastie et al., “The Elements of Statistical Learning, Data Mining, Inference, and Prediction,” 2008, 764 pages, Second Edition, Springer. |
| Weixiong Rao et al., “Optimal Resource Placement in Structured Peer-to-Peer Networks,” IEEE Transactions on Parallel and Distributed Systems, vol. 21, No. 7, Jul. 2010, 16 pgs. |
| “Total Carbon Accounting: A Framework to Deliver Locational Carbon Intensity Data”, White Paper, Nov. 2021, 29 pages. |
| Abdalkarim Awad et al., “Virtual Cord Protocol (VCP): A Flexible DHT-like Routing Service for Sensor Networks”, In Proceedings of the 5th IEEE International Conference on Mobile Ad Hoc and Sensor Systems, 2008, pp. 133-142. |
| Corrected Notice of Allowability, U.S. Appl. No. 15/004,757, Aug. 24, 2018, 4 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 15/174,850, Jul. 25, 2018, 37 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/194,240, Mar. 31, 2020, 6 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/430,336, Oct. 15, 2020, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/565,247, Oct. 15, 2020, 10 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/917,800, Dec. 16, 2021, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/917,800, Nov. 18, 2021, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/917,800, Oct. 25, 2021, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/943,291, Apr. 11, 2022, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/943,291, Feb. 25, 2022, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 16/943,291, Oct. 18, 2021, 5 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 18/099,854, Jun. 5, 2023, 7 pages. |
| Final Office Action, U.S. Appl. No. 13/084,923, Jul. 1, 2013, 10 pages. |
| Final Office Action, U.S. Appl. No. 15/004,757, Dec. 29, 2017, 27 pages. |
| Final Office Action, U.S. Appl. No. 15/215,474, Apr. 1, 2019, 7 pages. |
| Final Office Action, U.S. Appl. No. 15/668,665, Dec. 10, 2019, 13 pages. |
| Final Office Action, U.S. Appl. No. 15/702,617, Dec. 27, 2018, 54 pages. |
| Final Office Action, U.S. Appl. No. 16/952,009, Dec. 13, 2022, 9 pages. |
| Final Office Action, U.S. Appl. No. 18/196,980, Mar. 11, 2024, 22 pages. |
| H. Brendan McMahan et al., “Communication-Efficient Learning of Deep Networks from Decentralized Data,” 2017, 10 pages, Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, Florida, USA. |
| Hood, Cynthia S., Proactive Network-Fault Detection, Sep. 1997, IEEE Transactions on Reliability, vol. 46, No. 3, pp. 333-341. |
| Ian Goodfellow et al., “Deep Learning,” 2016, 798 pages, MIT Press. |
| International Preliminary Report on Patentability, PCT App. No. PCT/US2013/076971, Jul. 2, 2015, 15 pages. |
| International Preliminary Report on Patentability, PCT App. No. PCT/US2014/067607, Jun. 9, 2016, 11 pages. |
| International Preliminary Report on Patentability, PCT App. No. PCT/US2015/020780, Oct. 6, 2016, 10 pages. |
| International Search Report and Written Opinion, PCT App. No. PCT/US2013/076971, Apr. 4, 2014, 17 pages. |
| International Search Report and Written Opinion, PCT App. No. PCT/US2014/067607, Feb. 18, 2015, 13 pages. |
| International Search Report and Written Opinion, PCT App. No. PCT/US2015/020780, Jul. 2, 2015, 13 pages. |
| IT Services, “Environmental impact of IT: desktops, laptops and screens”, How we are reducing IT waste, and steps you can take to reduce your carbon footprint, available online at <https://www.it.ox.ac.uk/article/environment-and-it>, Apr. 13, 2022, 5 pages. |
| Jae Woo Lee et al., “0 to 10k in 20 Seconds: Bootstrapping Large-Scale DHT Networks”, 2011 IEEE International Conference on Communications, Jun. 9, 2011, pp. 1-6. |
| Justin Sutton-Parker, “Can analytics software measure end user computing electricity consumption?”, Springer, May 5, 2022, 19 pages. |
| Justin Sutton-Parker, “Determining commuting greenhouse gas emissions abatement achieved by information technology enabled remote working”, The 11th International Conference on Sustainable Energy Information Technology (SEIT), Aug. 9-12, 2021, 9 pages. |
| Justin Sutton-Parker, Determining end user computing device Scope 2 GHG emissions with accurate use phase energy consumption measurement, The 10th International Conference on Sustainable Energy Information Technology (SEIT), Aug. 9-12, 2020, pp. 484-491. |
| Justin Sutton-Parker, “Quantifying greenhouse gas abatement delivered by alternative computer operating system displacement strategies”, The 12th International Conference on Sustainable Energy Information Technology, Aug. 9-11, 2022, pp. 1-10. |
| Mongeau et al., “Ensuring integrity of network inventory and configuration data”, Telecommunications Network Strategy and Planning Symposium, Networks 2004, 11th International Vienna, Austria, Jun. 13-16, 2004, 6 pgs. |
| Non-Final Office Action, U.S. Appl. No. 12/412,623, Mar. 7, 2011, 10 pages. |
| Non-Final Office Action, U.S. Appl. No. 13/084,923, Dec. 9, 2013, 13 pages. |
| Non-Final Office Action, U.S. Appl. No. 13/084,923, Feb. 14, 2013, 8 pages. |
| Non-Final Office Action, U.S. Appl. No. 13/107,625, Jan. 14, 2014, 9 pages. |
| Non-Final Office Action, U.S. Appl. No. 13/301,250, Jun. 26, 2013, 11 pages. |
| Non-Final Office Action, U.S. Appl. No. 13/797,946, Feb. 27, 2015, 18 pages. |
| Non-Final Office Action, U.S. Appl. No. 14/530,601, Nov. 10, 2016, 8 pages. |
| Non-Final Office Action, U.S. Appl. No. 14/553,769, Feb. 9, 2017, 16 pages. |
| Non-Final Office Action, U.S. Appl. No. 14/554,711, Jul. 29, 2016, 23 pages. |
| Non-Final Office Action, U.S. Appl. No. 14/554,739, Aug. 26, 2016, 30 pages. |
| Non-Final Office Action, U.S. Appl. No. 15/004,757, Jun. 21, 2017, 23 pages. |
| Non-Final Office Action, U.S. Appl. No. 15/004,757, Mar. 9, 2018, 57 pages. |
| Non-Final Office Action, U.S. Appl. No. 15/215,468, Oct. 4, 2018, 13 pages. |
| Non-Final Office Action, U.S. Appl. No. 15/215,474, Sep. 10, 2018, 10 pages. |
| Non-Final Office Action, U.S. Appl. No. 15/668,665, Aug. 7, 2019, 11 pages. |
| Non-Final Office Action, U.S. Appl. No. 15/702,617, Jun. 1, 2018, 37 pages. |
| Non-Final Office Action, U.S. Appl. No. 16/443,720, Sep. 4, 2020, 13 pages. |
| Non-Final Office Action, U.S. Appl. No. 16/870,742, Oct. 28, 2021, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 16/917,800, Jul. 1, 2021, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 16/943,291, Jul. 16, 2021, 19 pages. |
| Non-Final Office Action, U.S. Appl. No. 16/943,307, Apr. 27, 2022, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 16/952,009, Aug. 1, 2022, 8 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/001,586, Jun. 9, 2022, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/129,638, Jul. 23, 2021, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/182,083, Apr. 27, 2023, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/503,023, Nov. 25, 2022, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/521,686, Oct. 4, 2022, 38 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/732,402, May 21, 2024, 20 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/751,504, Jun. 9, 2023, 31 pages. |
| Non-Final Office Action, U.S. Appl. No. 17/856,787, Apr. 11, 2024, 21 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/123,930, Jul. 14, 2023, 7 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/196,980, Sep. 8, 2023, 17 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/204,351, Jan. 5, 2024, 8 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/225,620, Mar. 14, 2024, 14 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/374,621, Aug. 16, 2024, 9 pages. |
| Notice of Allowability, U.S. Appl. No. 17/751,504, Dec. 18, 2023, 11 pages. |
| Notice of Allowance, U.S. Appl. No. 12/412,623, Oct. 5, 2011, 5 pages. |
| Notice of Allowance, U.S. Appl. No. 13/084,923, Jul. 30, 2014, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 13/107,625, Apr. 23, 2014, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 13/107,625, Oct. 22, 2014, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 13/301,250, Jan. 21, 2014, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 13/301,250, Oct. 24, 2014, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 13/797,946, Sep. 11, 2015, 18 pages. |
| Notice of Allowance, U.S. Appl. No. 13/797,962, Feb. 17, 2015, 10 pages. |
| Notice of Allowance, U.S. Appl. No. 14/530,601, Apr. 5, 2017, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 14/553,769, May 19, 2017, 6 pages. |
| Notice of Allowance, U.S. Appl. No. 14/554,711, Jan. 27, 2017, 22 pages. |
| Notice of Allowance, U.S. Appl. No. 14/554,739, May 9, 2017, 20 pages. |
| Notice of Allowance, U.S. Appl. No. 15/004,757, Jul. 16, 2018, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 15/136,790, Nov. 20, 2017, 11 pages. |
| Notice of Allowance, U.S. Appl. No. 15/174,850, Jun. 20, 2018, 39 pages. |
| Notice of Allowance, U.S. Appl. No. 15/215,468, Apr. 1, 2019, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 15/215,468, Jan. 24, 2019, 8 pages. |
| Notice of Allowance, U.S. Appl. No. 15/215,474, Jul. 11, 2019, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 15/215,483, Jun. 7, 2018, 9 pages. |
| Notice of Allowance, U.S. Appl. No. 15/668,665, Mar. 2, 2020, 7 pages. |
| Notice of Allowance, U.S. Appl. No. 15/686,054, Jul. 18, 2018, 6 pages. |
| Notice of Allowance, U.S. Appl. No. 15/702,617, Apr. 23, 2019, 24 pages. |
| Notice of Allowance, U.S. Appl. No. 15/713,518, Apr. 10, 2019, 14 pages. |
| Notice of Allowance, U.S. Appl. No. 15/713,518, Jul. 29, 2019, 13 pages. |
| Notice of Allowance, U.S. Appl. No. 15/878,286, Apr. 25, 2019, 11 pages. |
| Notice of Allowance, U.S. Appl. No. 15/878,286, Jan. 10, 2020, 6 pages. |
| Notice of Allowance, U.S. Appl. No. 15/878,286, Jul. 31, 2019, 5 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 17/683,213, Oct. 7, 2024, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 17/683,213, Sep. 26, 2024, 2 pages. |
| Corrected Notice of Allowability, U.S. Appl. No. 18/225,620, Oct. 15, 2024, 2 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/196,980, Sep. 19, 2024, 24 pages. |
| Non-Final Office Action, U.S. Appl. No. 18/440,922, Sep. 5, 2024, 14 pages. |
| Notice of Allowance, U.S. Appl. No. 17/683,213, Sep. 16, 2024, 18 pages. |
| Supplemental Notice of Allowability, U.S. Appl. No. 18/204,351, Oct. 1, 2024, 2 pages. |
| Notice of Allowance, U.S. Appl. No. 18/440,922, Oct. 29, 2024, 11 pages. |
| Number | Date | Country |
|---|---|---|
| 62937125 | Nov 2019 | US |
| | Number | Date | Country |
|---|---|---|---|
| Parent | 16952009 | Nov 2020 | US |
| Child | 18516882 | | US |