The subject technology relates to systems and methods for implementing incremental machine learning techniques across multiple geographic domains, and particularly for maintaining data sovereignty compliance for sovereign regions in which training data cannot be exported.
Data sovereignty is the concept that information stored in a digital form is subject to the laws of the country in which it is located. Many of the current concerns that surround data sovereignty relate to enforcing privacy regulations and preventing data that is stored in a foreign country from being subpoenaed by the host country's government.
The widespread adoption of cloud computing services, as well as new approaches to data storage such as object storage, has broken down traditional geopolitical barriers. In response, many countries have introduced new compliance requirements by amending their current laws or enacting legislation that requires customer data to be kept within the country in which the customer resides.
Certain features of the subject technology are set forth in the appended claims. However, the accompanying drawings, which are included to provide further understanding, illustrate disclosed aspects and together with the description serve to explain the principles of the subject technology. In the drawings:
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a more thorough understanding of the technology; however, it will be clear and apparent that the subject technology is not limited to the specific details set forth herein and may be practiced without these details. In some instances, structures and components are shown in block diagram form in order to avoid obscuring certain concepts.
Overview:
Aspects of the subject disclosure describe solutions for implementing incremental machine learning techniques between sovereign regions for which data export is restricted. As discussed in further detail below, data sovereignty regulations can restrict the export of certain types of data, such as various types of user data or personal information, that are useful for initializing and training various machine learning models. Using incremental machine learning methods, a given machine learning model can be trained and updated using only data from users residing in the same (sovereign) region. Once trained, the machine learning model can be exported for use in a different sovereign region without violating export controls, because no actual training data is transferred. Consequently, the machine learning model can be used in additional sovereign regions, and subsequently updated/trained with data that may also be export restricted, without violating export controls for user data of any sovereign region.
In some aspects, systems of the subject technology are configured to perform operations including receiving a machine learning model (“ML model”) via a first coordination agent, the ML model based on a first training data set corresponding with a first sovereign region, sending the ML model to a second coordination agent in a second sovereign region, wherein the second sovereign region is different from the first sovereign region, and receiving a second ML model from the second coordination agent, wherein the second ML model is based on updates to the original ML model using a second training data set corresponding with the second sovereign region.
Description:
Various machine learning techniques involve the configuration or “training” of a machine learning (ML) model, for example, using “training data” for which the desired outputs, labels, and/or target classification categories are known. Generally, ML models can be improved through exposure to greater amounts of training data. For example, some ML algorithms use historical data points (X) and labels (Y) to train a model Y=F(X) that can be used to predict labels (Y). The predictive power of the model Y=F(X) is generally improved as the model is presented with greater amounts of training data, e.g., shown a greater number of examples of the relationship between historical data points (X) and labels (Y).
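By way of illustration, the relationship Y=F(X) can be fit from historical examples and then used to predict labels for new data points. The following is a minimal sketch, assuming Python with scikit-learn; the SGDClassifier estimator and the data values are illustrative assumptions only:

```python
# Illustrative sketch: fit a model Y = F(X) from historical data points (X)
# and known labels (Y), then predict labels for unseen points.
import numpy as np
from sklearn.linear_model import SGDClassifier  # assumed example estimator

X_hist = np.array([[0.1, 1.2], [0.4, 0.9], [3.1, 0.2], [2.8, 0.4]])  # historical data points (X)
Y_hist = np.array([0, 0, 1, 1])                                      # known labels (Y)

model = SGDClassifier()       # F in Y = F(X)
model.fit(X_hist, Y_hist)     # predictive power generally improves with more (X, Y) examples

Y_pred = model.predict(np.array([[0.2, 1.0], [3.0, 0.3]]))  # predicted labels for new points
```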
With conventional ML, the only way to update the model Y=F(X) is to perform batch training using all historical data, e.g., all historical data points (X) and corresponding labels (Y). Conventional ML training has been improved with incremental ML techniques, which eliminate the need for batch training by allowing models to be updated incrementally, e.g., as soon as new training data become available. However, incremental ML techniques do not address data availability barriers imposed by data sovereignty regulations, which limit the total amount of data available for ML model training. For example, data sovereignty regulations prohibit the export of certain types of data (e.g., user data and personal information) and can therefore impose significant restrictions on ML algorithms deployed in cloud environments whose implementations span multiple different sovereign regions.
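The distinction between batch retraining and incremental updating can be sketched as follows, assuming scikit-learn's partial_fit interface as the incremental mechanism (an illustrative assumption, not a prescribed implementation):

```python
# Batch retraining vs. incremental updating when new training data arrive.
import numpy as np
from sklearn.linear_model import SGDClassifier

X_old, Y_old = np.array([[0.1, 1.2], [3.1, 0.2]]), np.array([0, 1])  # historical data
X_new, Y_new = np.array([[0.3, 1.1]]), np.array([0])                 # newly available data

# Conventional ML: every update requires retraining on all historical data.
batch_model = SGDClassifier().fit(np.vstack([X_old, X_new]),
                                  np.concatenate([Y_old, Y_new]))

# Incremental ML: the existing model is updated using only the new examples;
# the historical data set need not be replayed.
incr_model = SGDClassifier()
incr_model.partial_fit(X_old, Y_old, classes=np.array([0, 1]))  # initial training
incr_model.partial_fit(X_new, Y_new)                             # update as new data arrive
```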
Aspects of the disclosed technology address the foregoing limitations imposed by data sovereignty regulations by employing incremental ML techniques in which ML models are exported between various sovereign regions, without violating data export controls. As discussed in further detail below, the coordination of ML model distribution and continued ML model updates/training can be facilitated through the use of a centralized system, i.e., a “coordination server.” Alternatively, ML model distribution can be coordinated using a distributed (e.g., peer-to-peer) communication scheme.
It is understood that the described techniques can be applied to a variety of machine learning and/or classification algorithms, and that the scope of the technology is not limited to a specific machine learning implementation. By way of example, implementations of the technology can include the coordination and distribution of incremental ML models based on one or more classification algorithms, including but not limited to: a Multinomial Naive Bayes classifier, a Bernoulli Naive Bayes classifier, a Perceptron classifier, a Stochastic Gradient Descent (SGD) Classifier, and/or a Passive Aggressive Classifier, or the like.
In some aspects, ML models can be configured to perform various types of regression, for example, using one or more regression algorithms, including but not limited to: a Stochastic Gradient Descent Regressor, and/or a Passive Aggressive Regressor, etc. ML models can also be based on clustering algorithms (e.g., a Mini-batch K-means clustering algorithm), a recommendation algorithm (e.g., a Minwise Hashing algorithm, or a Euclidean LSH algorithm), and/or an anomaly detection algorithm, such as a Local Outlier Factor algorithm. Additionally, ML models can employ a dimensionality reduction approach, such as one or more of: a Mini-batch Dictionary Learning algorithm, an Incremental Principal Component Analysis (PCA) algorithm, a Latent Dirichlet Allocation algorithm, and/or a Mini-batch K-Means algorithm, etc.
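As a non-limiting sketch, several of the algorithm families named above correspond to estimators that support incremental (out-of-core) updates in scikit-learn; the library choice and constructor arguments below are assumptions for illustration only:

```python
# Examples of incremental (partial_fit-capable) estimators corresponding to the
# classifier, regressor, clustering, and dimensionality-reduction algorithms above.
from sklearn.naive_bayes import MultinomialNB, BernoulliNB
from sklearn.linear_model import (Perceptron, SGDClassifier, PassiveAggressiveClassifier,
                                  SGDRegressor, PassiveAggressiveRegressor)
from sklearn.cluster import MiniBatchKMeans
from sklearn.decomposition import (MiniBatchDictionaryLearning, IncrementalPCA,
                                   LatentDirichletAllocation)

classifiers = [MultinomialNB(), BernoulliNB(), Perceptron(),
               SGDClassifier(), PassiveAggressiveClassifier()]
regressors = [SGDRegressor(), PassiveAggressiveRegressor()]
clustering_and_reduction = [MiniBatchKMeans(n_clusters=3),
                            MiniBatchDictionaryLearning(n_components=5),
                            IncrementalPCA(n_components=2),
                            LatentDirichletAllocation(n_components=4)]
```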
In this example, merged training set 105 is used to produce machine learning model 109 that is used to serve each sovereign region, e.g., each of Country A, Country B, and Country C, in the form of global models 106A, 106B, and 106C, respectively. As discussed above, incremental machine learning techniques can be implemented at each of the separate sovereign regions, and used to update the respectively provided global model with new data associated with that region. For example, using incremental machine learning updates, global model 106A can be updated with new data 107A, associated with Country A. In turn, global model 106B is updated with new data 107B resident in Country B, and global model 106C is updated with new data 107C, residing in Country C.
The sharing of training data represented by topology 101 is advantageous in many ML implementations due to the greater availability of training data. However, in practice, restrictions on data export (data sovereignty regulations) often prohibit the sharing of training data sets outside of their respective sovereign regions.
As illustrated with respect to Country A, machine learning algorithm 119A is trained using training set 115A to produce local model 120A. In turn, local model 120A is updated based on new data 117A, all of which reside in, and are not exported from, Country A. Country B and Country C are subject to similar restrictions. As such, the ML algorithm used in Country B (e.g., machine learning algorithm 119B) can only be initialized using training set 115B, and local model 120B can only be updated using new data 117B. The ML algorithm used in Country C (e.g., machine learning algorithm 119C) can only be initialized using training set 115C; similarly, local model 120C is only updated using new data 117C. That is, none of the ML algorithms, or the resulting models, can take advantage of training data sets and/or new data from outside sovereign regions.
The data provided in
Subsequently, new data points 310 can be provided to machine learning model 308, which performs label predictions outputted as “predicted labels” 312. In the incremental ML model illustrated by topology 300, incremental machine learning algorithm 302 can be continuously or periodically updated without the need to perform retraining on labels 304 and/or data points 306. For example, new data points 310, when accompanied by new inputs 314, can be used to update incremental machine learning algorithm 302, (depicted as 302′ in
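A minimal sketch of this predict-then-update loop is shown below, with hypothetical variable names mapped to the elements described above; scikit-learn is an assumed implementation choice:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

data_points = np.array([[0.1, 1.2], [3.1, 0.2], [2.9, 0.5]])   # data points 306
labels = np.array([0, 1, 1])                                    # labels 304

model = SGDClassifier()                                            # incremental ML algorithm 302
model.partial_fit(data_points, labels, classes=np.array([0, 1]))  # produces ML model 308

new_data_points = np.array([[0.2, 1.0], [3.0, 0.4]])              # new data points 310
predicted_labels = model.predict(new_data_points)                  # predicted labels 312

# When accompanying ground-truth values (new inputs 314) later become available,
# the model is updated in place; labels 304 / data points 306 are not replayed.
new_inputs = np.array([0, 1])
model.partial_fit(new_data_points, new_inputs)                     # updated algorithm 302'
```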
As illustrated, network topology 400 includes three distinct sovereign regions in which ML deployments are implemented, i.e., Country A, Country B, and Country C. In this example, data sovereignty regulations exist for each country, restricting export of any potential data (e.g., user information or other privacy protected data) that may be included in training data sets, e.g., 403A, 403B and/or 403C, and new data sets e.g., 405A, 405B, and 405C.
Training can be performed on an incremental machine learning algorithm, for example, to produce an ML model that can then be exported to other regions or jurisdictions, without the need to export training data. The trained ML model provides a mathematical relationship (e.g., a function) relating inputs to a specified output parameter (e.g., a customer “churn rate”), and does not include restricted information types. Therefore, export of the trained ML model does not trigger sovereignty restrictions.
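A brief sketch of what such an export involves follows, assuming joblib serialization of a scikit-learn estimator (both assumptions for illustration): the exported artifact carries only the fitted parameters relating inputs to the output (e.g., a churn-rate prediction), while the training records remain in the originating region.

```python
import joblib
import numpy as np
from sklearn.linear_model import SGDClassifier

# Regionally resident training data (e.g., per-customer usage features and
# churn labels); these records are never transferred out of the region.
X_train = np.array([[12.0, 0.0], [3.0, 1.0], [40.0, 0.0], [2.0, 1.0]])
y_churn = np.array([0, 1, 0, 1])

model = SGDClassifier().fit(X_train, y_churn)

# Only the model artifact (coefficients, intercept, hyperparameters) is
# serialized and exported; it encodes a mathematical relationship, not the data.
joblib.dump(model, "churn_model_v1.joblib")
```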
In practice, incremental machine learning algorithm 402A is initialized/trained using training data set 403A, in Country A. After training is complete, a first version (e.g., ver1) of ML model 409A is produced. As illustrated, ML model 409A is used to perform machine learning on new data 405A to produce classifications/labels 407A, for application in Country A. Due to data sovereignty restrictions, data contained within training data set 403A, new data 405A, and labels 407A are potentially subject to restriction and cannot be exported from their current sovereign region, e.g., Country A.
To gain the benefit of training performed to produce ML model 409A (e.g., using training data 403A and new data 405A), ML model 409A is exported to Country B. Because ML model 409A provides only a mathematical relationship between input data (X) and output labels (Y), the actual information comprising ML model 409A is not subject to export controls.
Once exported to Country B, ML model 409A is subject to further training, e.g., now as incremental machine learning algorithm 402B. Training is performed using training data set 403B, which is resident in Country B and also subject to export control. The results of further training are used to produce a second version (e.g., ver2) of the ML model, i.e., ML model 409B, and using incremental machine learning techniques, ML model 409B is further updated using new data 405B, resident in Country B. Therefore, ML model 409B represents the cumulative training performed on incremental machine learning algorithm 402A, using training data sets 403A and 403B, as well as new data sets 405A and 405B. By exporting ML model 409A, the benefit of access to greater amounts of training data can accrue to machine learning implementations performed in Country B, without violating sovereign data controls of either Country A or Country B.
Subsequently, ML model 409B is exported to Country C, where further training is performed using training data set 403C and incremental machine learning algorithm 402C to produce ML model 409C (ver3). Similar to the above example, ML model 409C represents a third version of the original ML model 409A, which now has the benefit of training performed in all sovereign regions, e.g., Country A, Country B, and Country C, without transmitting data sets between them.
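The versioned flow described above (ver1 trained in Country A, updated to ver2 in Country B, and to ver3 in Country C) can be sketched as follows; the helper function, synthetic data, and estimator choice are illustrative assumptions, and in practice each call would execute within its own sovereign region:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def train_locally(model, X_local, y_local):
    """Update the (possibly imported) model using only data resident in this region."""
    model.partial_fit(X_local, y_local, classes=np.array([0, 1]))
    return model

rng = np.random.default_rng(0)
X_a, y_a = rng.normal(size=(20, 3)), rng.integers(0, 2, 20)  # stays in Country A
X_b, y_b = rng.normal(size=(20, 3)), rng.integers(0, 2, 20)  # stays in Country B
X_c, y_c = rng.normal(size=(20, 3)), rng.integers(0, 2, 20)  # stays in Country C

model_v1 = train_locally(SGDClassifier(), X_a, y_a)  # trained in Country A (ver1)
model_v2 = train_locally(model_v1, X_b, y_b)          # imported model updated in Country B (ver2)
model_v3 = train_locally(model_v2, X_c, y_c)          # updated again in Country C (ver3)
```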
As illustrated by example network topology 500, coordination server 502 is communicatively coupled to each of the plurality of agents 504. In this example, agent 504A resides in a first sovereign region (e.g., Country A), agent 504B resides in a second sovereign region (e.g., Country B), and agent 504C resides in a third sovereign region (e.g., Country C). It is understood that the various agents 504 can be one or more servers/systems configured for communicating over a network, such as a local area network (LAN), a wide-area network (WAN), or a network of networks, such as the Internet.
Agents 504 are each configured to facilitate the transfer of ML models 506 to other sovereign areas, via coordination server 502. Although topology 500 illustrates agents 504 and coordination server 502 as being in different geographic/sovereign regions, it is understood that agents 504 can reside outside of the sovereign regions they serve, and/or can share a common region with coordination server 502. However, in some preferred embodiments, agents 504 are located proximate to the ML models 506, and coordination server 502 resides in a central location proximate to each of the regions, e.g., Country A, Country B, and Country C.
In practice, agent 504A can be configured to provide ML model 510A (v1) to coordination server 502, for example, after ML model 510A is generated through initial training of ML algorithm 506A performed using training set 508A, and incremental training using new data 512A. As in the example discussed with respect to
After transfer to coordination server 502, ML model 510A is transferred to Country B via agent 504B, where it is further trained (e.g., via incremental ML algorithm 506B) using training set 508B. The result of additional training using training set 508B produces ML model 510B (v2). In turn, ML model 510B (v2) is provided back to coordination server 502, via agent 504B. Again, the transfer of ML model 510B does not necessitate the transfer of any data in either training set 508B or new data 512B.
After transfer to coordination server 502, ML model 510B is then transferred to Country C, via agent 504C, where it is further trained (e.g., via incremental ML algorithm 506C) using training data set 508C. The result of additional training using training set 508C produces ML model 510C (v3). As discussed above, ML model 510C (v3) can be further trained using an incremental machine learning technique, for example, as new data 512C are processed. In some implementations, the latest updated version of the ML model can again be provided to the first sovereign region, e.g., for further training using training data and/or new data originating from that region. In the example of topology 500, ML model 510C (v3) can be provided back to Country A via coordination server 502.
In the illustrated example, AgentA 604 and AgentB 606 first register with server 602 (e.g., steps 608A and 608B). After registration, server 602 provides training instruction 610 to AgentA 604, for example, to instruct AgentA 604 to begin training an associated ML model (v1). After ML model (v1) has been trained by AgentA 604, the model is then communicated to server 602 (step 612). Subsequently, ML model (v1) is transferred from server 602 to AgentB 606 (step 614).
Server 602 instructs AgentB 606 to perform further training on ML model (v1) (step 616). Similar to the examples provided above, subsequent training performed on ML model (v1) by AgentB 606 is done using data resident to a sovereign region of AgentB 606. In this manner, the deployment of ML model (v1) into the region of AgentB 606 can benefit from training performed in a sovereign region associated with AgentA 604, without the need to export training data from the region associated with AgentA 604 to AgentB 606.
The result of the additional training performed by AgentB 606 on ML model (v1) is an updated version of the ML model, e.g., version 2 (i.e., v2), which is then provided by AgentB 606 back to server 602 (step 618). Subsequently, ML model (v2) is transferred from server 602 back to AgentA 604 (step 620). As with the transfer of ML model (v1) from AgentA 604 to AgentB 606, the transfer of ML model (v2) back to AgentA 604 does not necessitate the transfer of any data that may be subject to export controls.
After receiving ML model (v2), AgentA 604 begins additional training upon receipt of a new training command from server 602 (step 622), producing a further updated version of the ML model, e.g., ML model (v3). As illustrated in the foregoing examples, ML model (v3) can then be provided to one or more other sovereign regions without the export of any user data. As such, ML model (v3) can benefit from training performed at multiple sovereign regions, without violation of sovereign data controls.
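A compact sketch of this exchange is given below. The class structure, pickle-based model serialization, and in-process method calls are assumptions made purely for illustration; in a deployment, the registration, training instructions, and model transfers would occur over a network, and each agent's data would physically remain in its own sovereign region.

```python
import pickle
import numpy as np
from sklearn.linear_model import SGDClassifier

class Agent:
    """Holds region-resident data; exchanges only serialized model bytes."""
    def __init__(self, name, X_local, y_local):
        self.name, self.X, self.y = name, X_local, y_local

    def train(self, model_bytes=None):
        model = pickle.loads(model_bytes) if model_bytes else SGDClassifier()
        model.partial_fit(self.X, self.y, classes=np.array([0, 1]))
        return pickle.dumps(model)          # only the model leaves the agent

class CoordinationServer:
    def __init__(self):
        self.agents, self.model_bytes = {}, None

    def register(self, agent):              # steps 608A / 608B
        self.agents[agent.name] = agent

    def instruct_training(self, name):      # steps 610, 616, 622
        self.model_bytes = self.agents[name].train(self.model_bytes)

rng = np.random.default_rng(1)
server = CoordinationServer()
server.register(Agent("AgentA", rng.normal(size=(20, 2)), rng.integers(0, 2, 20)))
server.register(Agent("AgentB", rng.normal(size=(20, 2)), rng.integers(0, 2, 20)))

server.instruct_training("AgentA")   # v1 trained in AgentA's region, returned (step 612)
server.instruct_training("AgentB")   # v1 forwarded (step 614); v2 trained and returned (step 618)
server.instruct_training("AgentA")   # v2 sent back (step 620); v3 trained in AgentA's region (step 622)
```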
Although the timing diagram of
Because the first machine learning model contains only information representing relationships between data in a training set that may or may not be subject to export control, the information comprising the actual learning model does not include information/data that is subject to export controls. By way of example, the first machine learning model can be based on user data associated with a churn rate for a particular service (see
In step 704, the first machine learning model is sent to a second coordination agent in a second sovereign region. In some aspects, the second sovereign region is different from the first sovereign region. By way of example, the first sovereign region can represent a particular country (e.g., Country A) that is subject to data sovereignty rules consistent with Country A's legal jurisdiction. In contrast, the second sovereign region can represent a different country (e.g., Country B) that is subject to data sovereignty rules consistent with Country B's legal jurisdiction.
In step 706, a second machine learning model is received (e.g., by the coordination server) from the second coordination agent. The second machine learning model is based on updates to the first machine learning model using a second training data set corresponding with the second sovereign region.
In some aspects, the second machine learning model can be transferred to a third coordination agent located in a third sovereign region, for example, wherein the third sovereign region is different from each of the first sovereign region and the second sovereign region.
Network device 810 includes a master central processing unit (CPU) 862, interfaces 868, and bus 815 (e.g., a PCI bus). When acting under the control of appropriate software and/or firmware, CPU 862 is responsible for executing packet management, error detection, and/or routing functions. CPU 862 preferably accomplishes all these functions under the control of software including an operating system and any appropriate applications software. CPU 862 can include one or more processors 863 such as a processor from the Motorola family of microprocessors or the MIPS family of microprocessors. In an alternative embodiment, processor 863 is specially designed hardware for controlling the operations of router 810. In a specific embodiment, a memory 861 (such as non-volatile RAM and/or ROM) also forms part of CPU 862. However, there are many different ways in which memory could be coupled to the system.
Interfaces 868 can be provided as interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the router 810. Among the interfaces that can be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces can be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor 862 to efficiently perform routing computations, network diagnostics, security functions, etc.
Although the system shown in
Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 861) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc.
Memory 915 can include multiple different types of memory with different performance characteristics. The processor 910 can include any general purpose processor and a hardware module or software module, such as module 1 (932), module 2 (934), and module 3 (936) stored in storage device 930, configured to control the processor 910, as well as a special-purpose processor where software instructions are incorporated into the actual processor design. The processor 910 may essentially be a completely self-contained computing system, containing multiple cores or processors, a bus, memory controller, cache, etc. A multi-core processor can be symmetric or asymmetric.
To enable user interaction with the computing device 900, an input device 945 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 935 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 900. The communications interface 940 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 930 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 925, read only memory (ROM) 920, and hybrids thereof.
The storage device 930 can include software modules 932, 934, 936 for controlling the processor 910. Other hardware or software modules are contemplated. The storage device 930 can be connected to the system bus 905. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 910, bus 905, display 935, and so forth, to carry out the function.
Chipset 960 can also interface with one or more communication interfaces 990 that can have different physical interfaces. Such communication interfaces can include interfaces for wired and wireless local area networks, for broadband wireless networks, as well as personal area networks. Some applications of the methods for generating, displaying, and using the GUI disclosed herein can include receiving ordered datasets over the physical interface or be generated by the machine itself by processor 955 analyzing data stored in storage 970 or 975. Further, the machine can receive inputs from a user via user interface components 985 and execute appropriate functions, such as browsing functions by interpreting these inputs using processor 955.
It can be appreciated that example systems 900 and 950 can have more than one processor 910 or be part of a group or cluster of computing devices networked together to provide greater processing capability.
Although the exemplary embodiment described herein employs storage device 460, it should be appreciated by those skilled in the art that other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, digital versatile disks, cartridges, random access memories (RAMs) 450, read only memory (ROM) 440, a cable or wireless signal containing a bit stream and the like, may also be used in the exemplary operating environment. Non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and transitory signals per se.
To enable user interaction with the computing device 400, an input device 490 represents any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 470 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems enable a user to provide multiple types of input to communicate with the computing device 400. The communications interface 480 generally governs and manages the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
For clarity of explanation, the illustrative system embodiment is presented as including individual functional blocks including functional blocks labeled as a “processor” or processor 420. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software and hardware, such as a processor 420, that is purpose-built to operate as an equivalent to software executing on a general purpose processor. For example, the functions of one or more processors may be provided by a single shared processor or multiple processors. (Use of the term “processor” should not be construed to refer exclusively to hardware capable of executing software.) Illustrative embodiments may include microprocessor and/or digital signal processor (DSP) hardware, read-only memory (ROM) 440 for storing software performing the operations discussed below, and random access memory (RAM) 450 for storing results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.
The logical operations of the various embodiments are implemented as: (1) a sequence of computer implemented steps, operations, or procedures running on a programmable circuit within a general use computer, (2) a sequence of computer implemented steps, operations, or procedures running on a specific-use programmable circuit; and/or (3) interconnected machine modules or program engines within the programmable circuits. The system 400 can practice all or part of the recited methods, can be a part of the recited systems, and/or can operate according to instructions in the recited non-transitory computer-readable storage media. Such logical operations can be implemented as modules configured to control the processor 420 to perform particular functions according to the programming of the module.
For example,
It is understood that any specific order or hierarchy of steps in the processes disclosed is an illustration of exemplary approaches. Based upon design preferences, it is understood that the specific order or hierarchy of steps in the processes may be rearranged, or that only a portion of the illustrated steps be performed. Some of the steps may be performed simultaneously. For example, in certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.”
A phrase such as an “aspect” does not imply that such aspect is essential to the subject technology or that such aspect applies to all configurations of the subject technology. A disclosure relating to an aspect may apply to all configurations, or one or more configurations. A phrase such as an aspect may refer to one or more aspects and vice versa. A phrase such as a “configuration” does not imply that such configuration is essential to the subject technology or that such configuration applies to all configurations of the subject technology. A disclosure relating to a configuration may apply to all configurations, or one or more configurations. A phrase such as a configuration may refer to one or more configurations and vice versa.
The word “exemplary” is used herein to mean “serving as an example or illustration.” Any aspect or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs.