The present disclosure relates to computer systems and methods for replicating a portion of a data set to a local repository. In particular, the present disclosure pertains to computer systems and methods for replicating a portion of a data set to a local repository associated with a subnetwork, the data set being stored on a central repository and associated with one or more subnetworks, and the portion of the data set being associated with the subnetwork.
A distributed database is a database in which portions of the database are stored in multiple physical locations and processing is distributed among multiple database nodes to provide increased availability and performance. To ensure that the multiple database nodes remain current, a replication process may be employed. A replication process may involve, for example, detecting changes in the database nodes and updating each database node such that all of the database nodes become identical to each other. However, such a process is both time and resource intensive. Further, such a process may not be feasible for systems such as internet-of-things (IoT) systems that may include data for billions of nodes.
Computer systems and methods for replicating a portion of a data set to a local repository are disclosed. In particular, computer systems and methods for replicating a portion of a data set to a local repository associated with a subnetwork are disclosed. The data set may be stored on a central repository and associated with one or more subnetworks. Further, the portion of the data set may be associated with the subnetwork.
In one embodiment, a method for a device associated with a subnetwork may include obtaining a portion of a data set from a central repository. The data set may be associated with one or more subnetworks, and the portion of the data set may be associated with the subnetwork. The method may further include obtaining a request for data originating from a node in the subnetwork. The requested data may include at least one of (i) the portion of the data set, and (ii) data generated based on the portion of the data set, and the request may be destined for the central repository. In addition, the method may include determining whether the central repository is unavailable to provide the requested data, and providing the requested data to the node after the central repository is determined as being unavailable.
In another embodiment, a device associated with a subnetwork may include one or more processors configured to obtain a portion of a data set from a central repository. The data set may be associated with one or more subnetworks, and the portion of the data set may be associated with the subnetwork. The one or more processors may be further configured to obtain a request for data originating from a node in the subnetwork. The requested data may include at least one of (i) the portion of the data set, and (ii) data generated based on the portion of the data set, and the request may be destined for the central repository. In addition, the one or more processors may be configured to determine whether the central repository is unavailable to provide the requested data, and provide the requested data to the node after the central repository is determined as being unavailable.
In yet another embodiment, a non-transitory computer-readable storage medium may store instructions that, when executed by a computer, cause the computer to perform a method for a device associated with a subnetwork. The method may include obtaining a portion of a data set from a central repository. The data set may be associated with one or more subnetworks, and the portion of the data set may be associated with the subnetwork. The method may further include obtaining a request for data originating from a node in the subnetwork. The requested data may include at least one of (i) the portion of the data set, and (ii) data generated based on the portion of the data set, and the request may be destined for the central repository. In addition, the method may include determining whether the central repository is unavailable to provide the requested data, and providing the requested data to the node after the central repository is determined as being unavailable.
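By way of illustration only, the following is a minimal Python sketch of the device-side method summarized above. The class, method, and data-structure names are hypothetical assumptions made for the example and do not appear in the present disclosure.

```python
# Hypothetical sketch of the summarized method; all names and data
# structures are illustrative assumptions, not part of the disclosure.

class LocalRepositoryDevice:
    def __init__(self, subnetwork_id, replicated_portion):
        # Portion of the data set obtained from the central repository,
        # limited to entries associated with this device's subnetwork.
        self.subnetwork_id = subnetwork_id
        self.replicated_portion = replicated_portion  # e.g., {node_id: data}

    def central_repository_unavailable(self):
        # Placeholder availability check; concrete strategies are
        # discussed later in this disclosure.
        return True

    def handle_request(self, node_id):
        # The request originates from a node in the subnetwork and is
        # destined for the central repository; it is served locally only
        # after the central repository is determined to be unavailable.
        if self.central_repository_unavailable():
            return self.replicated_portion.get(node_id)
        return None  # let the request proceed to the central repository


device = LocalRepositoryDevice("subnet-110", {"node-112": {"ip": "10.0.0.12"}})
print(device.handle_request("node-112"))  # {'ip': '10.0.0.12'}
```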
Embodiments are described more fully below with reference to the accompanying drawings, which form a part hereof, and which show specific exemplary embodiments. However, embodiments may be implemented in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope. Embodiments may be practiced as methods, systems or devices. Accordingly, embodiments may take the form of an entirely hardware implementation, an entirely software implementation or an implementation combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
The logical operations of the various embodiments are implemented (1) as interconnected machine modules within the computing system and/or (2) as a sequence of computer implemented steps running on a computing system. The implementation is a matter of choice dependent on the performance requirements of the computing system implementing the invention. Accordingly, the logical operations making up the embodiments described herein are referred to alternatively as operations, steps or modules.
Overview
Aspects of the present disclosure pertain to computer systems and methods for replicating a portion of a data set to a local repository. In particular, the present disclosure pertains to computer systems and methods for replicating a portion of a data set to a local repository associated with a subnetwork, the data set being stored on a central repository and associated with one or more subnetworks, and the portion of the data set being associated with the subnetwork.
In some embodiments, the replicated portion of the data set on the local repository associated with the subnetwork may be provided to nodes in the same subnetwork, for example, when the nodes request the portion of the data set from the central repository. In one example, when a node associated with the subnetwork requests the portion of the data set from the central repository, the local repository may intercept the request and provide the replicated portion of the data set on the local repository.
In some embodiments, data generated based on the replicated portion of the data set (i.e., derived data) may be provided to nodes associated with the same subnetwork, for example, when the nodes request data generated based on the portion of the data set stored on the central repository. For example, when derived data is requested by a node on the subnetwork, the local repository may intercept the request, generate the requested data based on the replicated portion of the data on the local repository, and provide the generated data to the node.
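As a non-limiting illustration, the following Python sketch shows how a local repository might generate derived data from its replicated portion in response to an intercepted request; the record layout and the query are assumptions made for the example.

```python
# Illustrative sketch of serving derived data from a local replica;
# the records and the predicate-based query are assumed examples.

replicated_portion = {
    "node-112": {"type": "printer", "online": True},
    "node-114": {"type": "camera", "online": False},
}

def generate_derived_data(predicate):
    # The local repository runs the same kind of query the central
    # repository would have run, but over the replicated portion.
    return [node_id for node_id, attrs in replicated_portion.items()
            if predicate(attrs)]

# A node's request for "all online printers" can be answered locally.
print(generate_derived_data(lambda a: a["type"] == "printer" and a["online"]))
```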
Examples of Operating Environments
First subnetwork 110 may include a first node 112, a second node 114, and a third node 116. First subnetwork 110 may further include a gateway 118 connecting first node 112, second node 114, and third node 116 to each other and to gateway 140. Second subnetwork 120 may include a fourth node 122, a third subnetwork 150, and a gateway 128. Gateway 128 may connect fourth node 122 and third subnetwork 150 to each other and to gateway 140. Third subnetwork 150 may include a fifth node 124 and a gateway 126 that connects fifth node 124 to gateway 128 (e.g., using a network link 146).
As used herein, a “node” may be any physical or virtual entity capable of communicating via a computer network. For example, a node may be a physical computer, piece(s) of software, internet-of-things device, internet-of-things hub/bridge, virtual machine, server, printer, gateway, router, switch, smartphone/cellular phone, smart watch, tablet, or combination thereof. In some embodiments, a plurality of nodes may be implemented on a single physical or virtual device. Alternatively, or additionally, a single node may be implemented on a plurality of physical and/or virtual devices. In system 100, gateway 118, gateway 126, gateway 128, gateway 140, and central repository 130 may also be considered “nodes.”
As used herein, a “subnetwork” may be any logical grouping of nodes in a network. For example, a subnetwork may include nodes that are grouped based on the nodes' type, geographical location, ownership, performance, cost (e.g., cost of ownership/use), and/or whether the nodes implement certain communication protocols/standards. In another example, a subnetwork may include nodes designated by a system administrator of system 100. In yet another example, a subnetwork may include nodes selected by an algorithm. In some embodiments, a single node may be associated with a plurality of subnetworks. In some embodiments, a subnetwork may be a part of another subnetwork. In some embodiments, nodes in a subnetwork may communicate with each other using a first communication protocol and/or standard (e.g., Ethernet), and nodes in another subnetwork may communicate with each other using a second communication protocol and/or standard (e.g., fiber-optic communication). In these embodiments, nodes in the two subnetworks may communicate with each other via one or more gateways. The gateways, as a collective, may be capable of communicating using at least the first and second communication protocols and/or standards. As used herein, a “gateway” may be a node that connects nodes on a subnetwork to a node outside the subnetwork.
As used herein, a “network link” may be any communication component(s) enabling at least one node to communicate with at least one other node. In some embodiments, a network link may include any wired or wireless communication medium that can be used by one or more nodes for communication. Alternatively, or additionally, a network link may include a receiver and/or transmitter that receives and/or transmits data over a wired and/or wireless communication medium. In one example, a network link may be a wireless receiver and/or transmitter. In another example, a network link may be an Ethernet cable connecting two nodes. In this example, the network link may further include Ethernet modules that enable nodes to communicate over the Ethernet cable. In yet another example, a network link may include wireless transceivers.
In system 100, central repository 130 may provide portions of data set 135 to various nodes. For example, central repository 130 may obtain a portion of data set 135 and provide the obtained portion of data set 135 to first node 112 via gateway 140 and gateway 118. Alternatively, or additionally, central repository 130 may generate new data based on portions of data set 135, and provide the generated, new data to various nodes. For example, central repository 130 may obtain a portion of data set 135, generate new data based on the portion of data set 135, and provide the generated data to fifth node 124 via gateway 140, gateway 128, and gateway 126. The nodes may use the obtained portions of data set 135 to perform at least some of their intended functions. For example, the nodes may use the portions of data set 135 including identity data to authenticate nodes and/or users. In another example, the nodes may use the portions of data set 135 including blacklists and whitelists to implement a network filter for preventing and mitigating a DDoS attack.
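For illustration, a minimal sketch of the network-filter example follows; the list contents and the default-deny policy are assumptions made for the example, not requirements of the disclosure.

```python
# Hypothetical use of a replicated portion as a network filter: drop
# traffic from blacklisted sources and admit whitelisted ones.
blacklist = {"203.0.113.7"}       # replicated from data set 135
whitelist = {"198.51.100.4"}

def admit(source_ip):
    if source_ip in blacklist:
        return False               # drop: known-bad source
    return source_ip in whitelist  # assumed default-deny policy

print(admit("198.51.100.4"), admit("203.0.113.7"))  # True False
```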
In some embodiments, central repository 130 may provide data to a node by transmitting the data. Alternatively, or additionally, central repository 130 may provide data to a node by making the data available for retrieval (e.g., stored in a data store accessible by the node). Correspondingly, a node may obtain the data provided by central repository 130 by receiving the transmitted data and/or retrieving the data made available for retrieval.
As described above, central repository 130 may also be considered a node. Thus, central repository 130 may be, for example, a physical device and/or software executing on a personal computer, an internet-of-things device/hub, virtual machine, server, printer, gateway, router, switch, smartphone/cellular phone, smart watch, or tablet. For example, in some embodiments, central repository 130 may be implemented on gateway 140. In some embodiments, central repository 130 may include one or more database servers. In some embodiments, at least some functions of central repository 130 may be implemented on a cloud platform, such as Amazon Web Services (AWS), Google Cloud, and Microsoft Azure.
In some embodiments, central repository 130 may include a server and a data store. In these embodiments, the server may obtain data from the data store and provide the obtained data to various nodes. Alternatively, or additionally, the server may obtain data from the data store, generate new data based on the obtained data, and provide the generated data to various nodes.
As used herein, the phrase “data for a node” may refer to any data that may be used by the node. For example, first node 112 may obtain data for first node 112, and first node 112 may perform an action based on the obtained data for first node 112. Alternatively, or additionally, the phrase “data for a node” may refer to any data that may be used to generate new data that may be used by the node. For example, a node (e.g., central repository 130) may generate new data from data for first node 112, and first node 112 may obtain the generated data and perform an action based on the obtained data.
In some situations, however, central repository 130 may be unavailable for some of the nodes in system 200 to access. More particularly, central repository 130 may be inaccessible to, and/or undesirable to be accessed by, one or more nodes in system 200. For example, network link 142 may experience an outage during a scheduled maintenance of network equipment. In another example, network link 144 may be a satellite communication link that may be expensive to use during peak hours. In yet another example, network link 146 may be a wireless network link connecting a portable device (e.g., fifth node 124 and gateway 126) located in an underground tunnel to gateway 128. Further, central repository 130 may cease to operate, for example, due to a malicious attack (e.g., a distributed denial-of-service attack) or other technical issues. Consequently, in these situations, nodes that require the data from data set 135, or data generated based on data from data set 135, may not be able to perform their intended functions unless an alternative data source for such data is available to them.
To that end, in system 200, data set 135 on central repository 130 may be replicated to local repositories (e.g., local repository 220, local repository 230, and local repository 232) when central repository 130 is available to be accessed by the local repositories (e.g., during off-peak hours or when central repository 130 is operating normally). Further, the local repositories may be configured to perform at least some of the functions of central repository 130 for the nodes in the same subnetwork using the replicated version of data set 135 stored locally. In one example, during normal operation, data on central repository 130 may be replicated to local repository 220 on gateway 118. After central repository 130 becomes unavailable, local repository 220 may provide first node 112, second node 114, and third node 116 with the replicated data stored in local repository 220. Alternatively, or additionally, after central repository 130 becomes unavailable, local repository 220 may generate new data based on local repository 220's replicated data and provide the newly generated data to first node 112, second node 114, and third node 116. The process used by local repository 220 to generate the new data may be the same, or substantially the same, as the process that would have been used by central repository 130 to generate the new data based on central repository 130's data. Further, similar to central repository 130, a local repository may store the replicated data internally or on a data store accessible by the local repository.
However, replicating the entire data set 135 to multiple local repositories is resource intensive and time consuming. Moreover, in systems such as internet-of-things systems where data set 135 may include data for billions of nodes, replication of data set 135 to multiple local repositories may not be technically and/or economically feasible.
Accordingly, in system 200, portions of data set 135 are selectively replicated to various local repositories. In particular, in some embodiments, a portion of data set 135 that is associated with a subnetwork may be replicated to a local repository associated with the same subnetwork.
In some embodiments, central repository 130 may dynamically assign nodes (including gateways) to subnetworks. For example, central repository 130 may dynamically assign nodes and gateways to a particular subnetwork based on a current network map of the system, changing performance requirements of various nodes, changing availability of various network links, and/or other non-technical factors. Thus, in these embodiments, the portion of data set 135 associated with a subnetwork may change during operation. For example, a new node may be added to a subnetwork, requiring additional data to be included in the portion of data set 135 and replicated to the local repository associated with the subnetwork. In another example, a node may be moved to another subnetwork, requiring data associated with the moved node to be replicated to a local repository in a different subnetwork. In yet another example, a new subnetwork may be created, requiring data to be replicated to an additional local repository associated with the new subnetwork.
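As an illustrative sketch only, the following shows how moving a node between subnetworks might change which portions require re-replication; the registry structure and names are assumptions made for the example.

```python
# Sketch of re-replication bookkeeping after a node moves between
# subnetworks; the membership registry is an assumed structure.

subnetwork_portions = {"subnet-110": {"node-112"}, "subnet-120": {"node-122"}}

def move_node(node_id, old_subnet, new_subnet):
    # Moving a node changes which portion of the data set is associated
    # with each subnetwork, so both local repositories need updates.
    subnetwork_portions[old_subnet].discard(node_id)
    subnetwork_portions[new_subnet].add(node_id)
    return [old_subnet, new_subnet]  # repositories to (re)replicate

print(move_node("node-112", "subnet-110", "subnet-120"))
```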
Moreover, data set 135 may be altered by one or more users, administrators, or other nodes. For example, data set 135 may include sensor readings from various nodes, and central repository 130 may receive an updated sensor reading from some of the nodes. In another example, data set 135 may be changed by one or more users via a user interface connected to central repository 130. In yet another example, an administrator may directly modify data set 135 stored on central repository 130.
In system 200, central repository 130 may provide updated portions of data set 135 to various local repositories (e.g., local repositories containing outdated data) after data in data set 135 is altered. In some embodiments, central repository 130 may initiate the process to provide the updated portions of data set 135 to the local repositories. That is, central repository 130 may “push” the updated portions of data set 135 to the local repositories.
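By way of example, the following sketch illustrates such a “push” update flow, in which changed records are forwarded to the local repository of the associated subnetwork; all names and the record layout are assumptions.

```python
# Sketch of the "push" update flow: after records change, the central
# repository forwards each changed record to the local repository whose
# subnetwork the record is associated with.
local_repositories = {"subnet-110": {}, "subnet-120": {}}

def push_updates(changed_records):
    for node_id, record in changed_records.items():
        target = local_repositories[record["subnetwork"]]
        target[node_id] = record["data"]  # overwrite the outdated copy

push_updates({"node-112": {"subnetwork": "subnet-110", "data": "v2"}})
print(local_repositories["subnet-110"])  # {'node-112': 'v2'}
```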
In some embodiments, portions of data set 135 may be provided to local repositories using one or more trusted communications. As used herein, a trusted communication is a communication where the recipient may verify the identity of the sender. For example, in system 200, a portion of data set 135 may be signed (i.e., a signature may be generated) using one or more private keys, and the generated signature may be provided to the local repository. The local repository, prior to accepting the provided portion of data set 135, may verify the signature using one or more corresponding public keys. In some embodiments, portions of data set 135 may be provided to local repositories using encrypted communications.
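As a non-limiting illustration of such a trusted communication, the following Python sketch signs a serialized portion of data set 135 and verifies the signature before acceptance. It assumes the third-party “cryptography” package and Ed25519 keys; the disclosure does not mandate any particular signature algorithm.

```python
# Minimal sign-then-verify sketch using the "cryptography" package
# (an assumed dependency) with Ed25519 keys.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()      # held by the central repository
public_key = private_key.public_key()           # known to the local repository

portion = b'{"node-112": {"ip": "10.0.0.12"}}'  # serialized portion of the data set
signature = private_key.sign(portion)           # sent alongside the portion

try:
    public_key.verify(signature, portion)       # verify before accepting
    accepted = True
except InvalidSignature:
    accepted = False
print(accepted)  # True
```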
In some embodiments, a local repository associated with a subnetwork, or a node that includes the local repository, may intercept requests for data that are destined for central repository 130 and originating from nodes in the same subnetwork. Further, in response to the request, the local repository or the node that includes the local repository may provide the requested data using the replicated data stored on the local repository. As an example, in system 200, first node 112 may transmit a request for data for first node 112 destined for central repository 130. In situations where central repository 130 is available, central repository 130 may receive the request, and provide the requested data (i.e., data for first node 112 stored on central repository 130) to first node 112. However, in situations where central repository 130 is unavailable, gateway 118 may intercept and respond to the request by providing the requested data to first node 112 using the replicated data for first node 112 stored on local repository 220.
In some embodiments, local repositories may provide the requested data to the nodes such that the nodes may process the data in the same, or substantially the same, manner as the data that was provided by central repository 130. For example, the data provided by local repositories may be indistinguishable from the data provided by central repository 130. In another example, the data provided by the local repositories may be in the same format, or in substantially the same format, as the data provided by central repository 130. In yet another example, the data provided by local repositories may be signed using a private key associated with central repository 130. For example, the data provided by local repositories may be signed using a private key shared with central repository 130 or derived from a private key accessible by central repository 130. In some embodiments, local repositories, after determining that central repository 130 is unavailable, may prevent the request from reaching central repository 130.
In some embodiments, local repositories may be implemented on a plurality of nodes. For example, local repositories may be implemented on a plurality of gateway devices on the same subnetwork. In these embodiments, each node in the plurality of nodes may have its own copy of the replicated portion of data set 135. Alternatively, the replicated portion of data set 135 may be distributed among the plurality of nodes. In some embodiments, local repositories may be implemented on edge nodes (e.g., first node 112, second node 114, and third node 116).
Local repositories may determine the availability of central repository 130 in numerous ways. In some embodiments, a network policy may define conditions in which central repository 130 is to be considered as being available or unavailable. The conditions may include, for example, time/date at which central repository 130 may be available. In some embodiments, central repository 130 may provide local repositories with communications indicating that central repository 130 is available or unavailable. A local repository may determine that central repository 130 is available or unavailable if such a communication was received within a predetermined period of time. Alternatively, or additionally, a local repository may determine the availability of central repository 130 by providing a status request to central repository 130. In response, central repository 130 may provide the local repository with its status. The local repository may determine that central repository 130 is unavailable in the absence of a response from central repository 130.
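For illustration, the following sketch combines two of the strategies above: treating the central repository as available if a communication arrived within a predetermined window, and otherwise falling back to a status probe. The window and timeout values are arbitrary assumptions.

```python
# Illustrative availability logic; thresholds are assumed values.
import time

HEARTBEAT_WINDOW_SECONDS = 60.0

def central_available(last_heartbeat, probe=None):
    # Available if a heartbeat arrived within the predetermined window.
    if time.monotonic() - last_heartbeat <= HEARTBEAT_WINDOW_SECONDS:
        return True
    # Otherwise fall back to an explicit status request; absence of a
    # response is treated as unavailability.
    if probe is not None:
        try:
            return probe(timeout=2.0)
        except TimeoutError:
            return False
    return False

print(central_available(last_heartbeat=time.monotonic() - 5.0))  # True
```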
Although in system 200 local repositories are shown to be accessible by, and/or included in, gateways, a local repository associated with a subnetwork may be made accessible to, and/or included in, any node that can be accessed by nodes in the same subnetwork. For example, in system 200, local repository 220 may be made accessible to, and/or included in, third node 116. In another example, local repository 232 may be made accessible to, and/or included in, fourth node 122. In yet another example, a local repository may also be accessible by, and/or included in, gateway 140. Such a local repository may store, for example, data for nodes in first subnetwork 110, second subnetwork 120, and third subnetwork 150.
In addition to enabling data in data set 135 to be provided to nodes even when central repository 130 is not available, replicating portions of data set 135 to local repositories may provide numerous benefits for various types of systems. In one example, performance of a node may be improved because data needed by the node may be obtained from a local repository, which may be accessed with less latency. To that end, performance may be further improved by including a local repository close to an edge node (e.g., at a local gateway) and/or in the edge node itself. In another example, the cost of operating system 200 may be reduced because data needed by a node may be obtained from a local repository, which may incur less cost (e.g., when global network links such as network link 142 are charged a usage fee) than obtaining the data from central repository 130. Moreover, the reduced data traffic to and from central repository 130 may enable system 200 to handle an additional number of nodes.
In system 300, nodes may request various types of data from central repository 130, and the requested data may be required by the nodes to perform at least some of their intended functions.
In these embodiments, attributes of the nodes in a subnetwork (stored on central repository 130) may be selectively replicated to a local repository in the same subnetwork.
In some embodiments, a node may request data generated based on the data stored in central repository 130, and after determining that central repository 130 is unavailable, a local repository may intercept the request, generate the requested data based on the replicated data stored in the local repository, and provide the generated data to the node. In one example, a computer in office building 320 may request a list of printers with a particular set of attributes. In this example, if central repository 130 is unavailable, gateway 128 or local repository 230 may perform a query on the replicated data stored on local repository 230 to generate the requested list and provide the generated list to the requesting computer.
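As an illustration of this example, the following sketch evaluates an attribute query against replicated records on the local repository; the attribute names and values are assumptions made for the example.

```python
# Sketch of the printer-listing example: a query over replicated
# attribute records on the local repository.
replicated_attributes = {
    "printer-1": {"type": "printer", "color": True, "floor": 12},
    "printer-2": {"type": "printer", "color": False, "floor": 3},
    "pc-7": {"type": "computer", "floor": 12},
}

def list_nodes_matching(required):
    # Return IDs of nodes whose attributes include every requested pair,
    # as the central repository would have computed from its data set.
    return [node_id for node_id, attrs in replicated_attributes.items()
            if all(attrs.get(k) == v for k, v in required.items())]

print(list_nodes_matching({"type": "printer", "color": True}))  # ['printer-1']
```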
In system 300, an administrator may add, change, or remove attributes for nodes in system 300 by changing the data stored on central repository 130. For example, an interface may be available to provide an administrator with options to add, change, or remove the attribute data on central repository 130. In some embodiments, the attributes may change in response to alteration of a node's configuration or removal/addition of a node. For example, a node's network configuration may change, causing the node's IP address to change. In another example, a new node may be added or an existing node may be removed, requiring the attributes for the node to be added or removed.
After the attribute data on central repository 130 is altered, the changes may be propagated to the local repositories. For example, if attributes for one of the nodes in the bottom half of office building 320 are changed, central repository 130 may initiate a process to replicate the updated attributes to both local repository 230 and local repository 232.
In system 400, verifying that the sensor readings are indeed from an authorized sensor (i.e., IoT sensor 112) may enable system 400 to prevent and/or mitigate malicious attacks on system 400, such as an attack spoofing IoT sensor 112 in an attempt to inject false sensor readings into system 400. After receiving the sensor readings and the signature, IoT hub 114 may attempt to verify IoT sensor 112's signature before processing the received sensor readings.
In some embodiments, IoT hub 114 may verify IoT sensor 112's signature by obtaining and using IoT sensor 112's public key.
However, in system 400, oil rig 410 may not have a continuous connection to central repository 130. For example, satellite links 425 may not be available during storms or on cloudy days. Consequently, in these situations, IoT hub 114 may not be able to verify that the sensor readings are indeed from IoT sensor 112 unless an alternative data source for the public keys is available to IoT hub 114. To that end, in system 400, a subset of the public keys stored on central repository 130 may be replicated to local repositories (e.g., on-site gateway 118). Further, the local repositories may intercept requests for the public keys destined for central repository 130 and provide the requested public keys to IoT hub 114.
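By way of illustration, the following sketch answers a public-key request from the local replica when the satellite link is unavailable; the function names, key bytes, and link-status flag are hypothetical.

```python
# Illustrative key lookup with local fallback: the request for a public
# key is answered by the on-site gateway's replica when the satellite
# link to the central repository is down.
replicated_public_keys = {"iot-sensor-112": b"...public key bytes..."}

def get_public_key(sensor_id, satellite_link_up, fetch_from_central):
    if satellite_link_up:
        return fetch_from_central(sensor_id)
    # Interception path: serve the replicated key so the IoT hub can
    # still verify signatures while the link is unavailable.
    return replicated_public_keys.get(sensor_id)

key = get_public_key("iot-sensor-112", satellite_link_up=False,
                     fetch_from_central=lambda s: None)
print(key is not None)  # True
```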
Alternatively, or additionally, in some embodiments, IoT hub 114 may verify IoT sensor 112's signature by requesting another node (e.g., central repository 130) to verify the signature. For example, IoT hub 114 may attempt to provide the obtained sensor readings and IoT sensor 112's signature to central repository 130. If central repository 130 is available, central repository 130 may verify the signature using IoT sensor 112's public key stored on central repository 130 and respond to IoT hub 114 with a communication indicative of whether the signature is valid or not. If central repository 130 is unavailable, on-site gateway 118 may intercept the sensor readings and IoT sensor 112's signature, verify IoT sensor 112's signature using the replicated version of IoT sensor 112's public key, and respond to IoT hub 114 with a communication indicative of whether the signature is valid or not. Thus, even when central repository 130 is unavailable, trusted communications between the nodes in oil rig 410 may be possible.
An Example of a Process
At a step 502, central repository 130 may identify a portion of data set 135 that is associated with a subnetwork. In one example, data set 135 may include data for nodes that are in one or more subnetworks, and the portion of data set 135 may include data for nodes that are in the subnetwork of the one or more subnetworks. In another example, data set 135 may include data for nodes that are in a plurality of subnetworks, and the portion of data set 135 may include data for nodes that are in the subnetwork of the plurality of subnetworks.
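For illustration, a minimal sketch of step 502 follows, selecting the records of data set 135 associated with a single subnetwork; the record layout is an assumption made for the example.

```python
# Sketch of step 502: identify the portion of the data set associated
# with one subnetwork; the record layout is assumed.
data_set_135 = {
    "node-112": {"subnetwork": "subnet-110", "data": "a"},
    "node-114": {"subnetwork": "subnet-110", "data": "b"},
    "node-122": {"subnetwork": "subnet-120", "data": "c"},
}

def identify_portion(subnetwork_id):
    return {node_id: rec for node_id, rec in data_set_135.items()
            if rec["subnetwork"] == subnetwork_id}

print(sorted(identify_portion("subnet-110")))  # ['node-112', 'node-114']
```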
At a step 504, central repository 130 may provide the identified portion of the data set 135 to a local repository associated with the subnetwork. In some embodiments, central repository 130 may initiate a process to replicate the identified portion of data set 135 to a local repository associated with the subnetwork. The local repository associated with the subnetwork may include, for example, a gateway connected to at least one node in the subnetwork. Alternatively, the local repository may be implemented on an edge node in the subnetwork.
At an optional step, central repository 130 may provide updates to the identified portion of the data set 135. For example, after the portion of the data set 135 is altered on central repository 130, central repository 130 may initiate a process to provide the updated portion of the data set 135 to the local repository.
At a step 506, the local repository may obtain the portion of the data set 135 provided by central repository 130. In some embodiments, the local repository may store the obtained portion of the data set 135 on a data store within the local repository and/or on a data store accessible by the local repository.
At an optional step, the local repository may obtain the updates to the identified portion of the data set 135. After obtaining the updates, the local repository may apply the updates to the portion of the data set 135 on the local repository.
At a step 508, a node in the subnetwork may provide a request for data. The request may originate from the node in the subnetwork, and the requested data may include at least one of (i) the portion of the data set 135, and (ii) data generated based on the portion of the data set 135. Further, the request may be destined for the central repository 130.
At a step 510, the local repository may obtain the request for data originating from the node in the subnetwork. For example, the local repository may intercept the request for data destined for central repository 130. In some embodiments, the local repository may prevent the request from reaching central repository 130.
At a step 512, the local repository may determine whether central repository 130 is unavailable to provide the requested data to the node. As discussed above, a local repository may determine the availability of central repository 130 in numerous ways. In some embodiments, a network policy may define conditions in which central repository 130 is to be considered as being available or unavailable. In these embodiments, the local repository may access the network policy (e.g., by accessing a policy server). The conditions may include, for example, time/date at which central repository 130 may be available. In some embodiments, as discussed above, central repository 130 may provide local repositories with communications indicating that central repository 130 is available or unavailable. A local repository may determine that central repository 130 is available or unavailable if such a communication was received within a predetermined period of time. Alternatively, or additionally, a local repository may determine the availability of central repository 130 by providing a status request to central repository 130. In response, central repository 130 may provide the local repository with its status. The local repository may determine that central repository 130 is unavailable in the absence of a response from central repository 130.
At a step 514, the local repository may provide the requested data to the node after the central repository is determined as being unavailable. At a step 516 the node may obtain the requested data. At a step 518, the node may process the requested data. In some embodiments, the node may perform an action based on the requested data.
While illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed routines may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.
This application is a continuation of U.S. application Ser. No. 15/652,089, filed on Jul. 17, 2017, titled “SYSTEMS AND METHODS FOR DISTRIBUTING PARTIAL DATA TO SUBNETWORKS,” which is a continuation-in-part of U.S. application Ser. No. 15/588,533, filed on May 5, 2017, titled “SYSTEMS AND METHODS FOR ENABLING TRUSTED COMMUNICATIONS BETWEEN ENTITIES,” which claims priority to U.S. Provisional Application No. 62/332,271, filed on May 5, 2016, titled “DEVICE AUTHENTICATION USING A CENTRAL REPOSITORY.” This application also claims priority to U.S. Provisional Application No. 62/469,346, filed on Mar. 9, 2017, titled “METHODS AND SYSTEMS FOR IDENTITY MANAGEMENT.” Further, this application is related to U.S. application Ser. No. 15/652,098, titled “SYSTEMS AND METHODS FOR ENABLING TRUSTED COMMUNICATIONS BETWEEN CONTROLLERS,” U.S. application Ser. No. 15/652,108, titled “SYSTEMS AND METHODS FOR MITIGATING AND/OR PREVENTING DISTRIBUTED DENIAL-OF-SERVICE ATTACKS,” and U.S. application Ser. No. 15/652,114, titled “SYSTEMS AND METHODS FOR VERIFYING A ROUTE TAKEN BY A COMMUNICATION,” each of which was filed on Jul. 17, 2017. The disclosures of the above applications are hereby incorporated by reference in their entirety for all purposes.