Distributed collaborative machine learning enables model training across a distributed data environment of client nodes without requiring the transfer of unprotected data from the client nodes to a central node or server. This feature increases the privacy and security of the data being analyzed. In addition, the party analyzing the results of the data processing at the central node never has access to the raw data at the client nodes; instead, only the smashed data (the outputs of the final, or “cut,” layer of the local portion of the model) are transferred to the central node during the training process, and only the local portion of the trained model is used for inference.
One approach to distributed collaborative machine learning is federated learning. In federated learning, the central node transfers a full machine learning model to each of the distributed client nodes containing local data, and later aggregates the locally trained full models from each client node to form a global model at the central node. This allows models to be trained in parallel, increasing the speed of operation of the system. A disadvantage of federated learning, however, is that each client node must run the full machine learning model. The client nodes in some real-world applications may not have sufficient computational capacity to process the full model, which may be particularly difficult when the models are deep-learning models. Another disadvantage is that transferring the full model may incur a substantial communication cost. There is also a privacy concern in giving each of the client nodes the full machine learning model.
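The aggregation step described above can be illustrated with a minimal sketch of weighted federated averaging (the FedAvg-style scheme); the function name and the toy two-client data are invented for the example:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate locally trained full models into a global model by
    averaging each layer, weighted by each client's share of the data."""
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Two clients, each holding a locally trained copy of a one-layer model.
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
global_w = federated_average([w_a, w_b], client_sizes=[100, 300])
```

Note that every client must hold (and train) the full parameter list, which is the computational and privacy drawback noted above.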
An alternative to federated learning is split learning. Split learning splits the full machine learning model into multiple smaller portions and trains them separately. Assigning only part of the network to the client nodes reduces the processing load at each client node. Communication load is also reduced, because only the smashed data is transferred to the central node. Privacy is improved as well, because the client nodes never have access to the full machine learning model known to the central node or server.
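A single split-learning training step can be sketched as follows; this is an illustrative NumPy toy with one linear layer on each side of the cut, and all names, dimensions, and the learning rate are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Client-side portion of the model: a single linear layer; the "cut"
# is placed after this layer, so only its output leaves the client.
W_client = rng.normal(scale=0.5, size=(4, 3))
# Server-side portion: a linear layer producing a scalar prediction.
W_server = rng.normal(scale=0.5, size=(3, 1))

def training_step(x, y, lr=0.005):
    global W_client, W_server
    smashed = x @ W_client           # client forward pass; smashed data
    pred = smashed @ W_server        # server forward pass
    err = pred - y
    grad_smashed = err @ W_server.T  # gradient the server sends back
    W_server -= lr * (smashed.T @ err)
    W_client -= lr * (x.T @ grad_smashed)  # client finishes backprop locally
    return float((err ** 2).mean())

x = rng.normal(size=(8, 4))
y = rng.normal(size=(8, 1))
losses = [training_step(x, y) for _ in range(200)]
```

Only `smashed` travels to the server and only `grad_smashed` travels back; the raw `x` and the client weights never leave the client.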
Differential privacy is a method of protecting data privacy based on the principle that privacy is a property of a computation over a database or silo, as opposed to the syntactic qualities of the database itself. Fundamentally, a computation is considered differentially private if it produces approximately the same result when applied to two databases that differ only by the presence or absence of a single record in the data. Differential privacy is powerful because of the mathematical and quantifiable guarantees that it provides regarding the re-identifiability of the underlying data. Differential privacy differs from historical approaches because of its ability to quantify the mathematical risk of de-identification using an epsilon value, which measures the privacy “cost” of a query. Differential privacy makes it possible to keep track of the cumulative privacy risk to a dataset over many analyses and queries.
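As a concrete illustration of the epsilon accounting described above, the following sketch answers counting queries with the Laplace mechanism and tracks cumulative privacy cost using simple sequential composition; the class and all names are invented for the example:

```python
import numpy as np

class PrivateCounter:
    """Answers counting queries with the Laplace mechanism and keeps a
    running total of the privacy "cost" spent across all queries."""
    def __init__(self, data):
        self.data = list(data)
        self.epsilon_spent = 0.0
        self.rng = np.random.default_rng()

    def count(self, predicate, epsilon):
        # A counting query has sensitivity 1: adding or removing a single
        # record changes the true answer by at most 1, so Laplace noise
        # with scale 1/epsilon yields an epsilon-differentially-private answer.
        true_count = sum(1 for row in self.data if predicate(row))
        noisy_count = true_count + self.rng.laplace(scale=1.0 / epsilon)
        self.epsilon_spent += epsilon  # basic sequential composition
        return noisy_count

counter = PrivateCounter(range(100))
answer = counter.count(lambda v: v < 50, epsilon=0.5)
```

The running `epsilon_spent` total is what makes the cumulative privacy risk over many analyses quantifiable.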
A vertically partitioned distributed data setting is one in which various databases or silos hold a number of different columns of data relating to the same individuals or entities. The owners of the data silos may wish to collaborate to use the distributed data to train a machine learning model or deep neural network to predict or classify some outcome under the constraint that the original data cannot be disclosed or exported from its original source. In addition, the collaborating silos may have varying degrees of risk tolerance with respect to the privacy constraints of the contributing data silos. It would be desirable therefore to develop a system for applying a machine learning model to a vertically partitioned distributed data network in order to maintain privacy using differential privacy techniques while also allowing for the various solutions afforded by machine learning processing.
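A toy illustration of the vertically partitioned setting may be helpful; the silo names, record identifiers, and column names below are all hypothetical:

```python
# Two silos hold different feature columns for the same individuals,
# linked only by a shared (pseudonymous) record identifier.
silo_bank = {"id1": {"income": 52000}, "id2": {"income": 71000}}
silo_hospital = {"id1": {"bmi": 24.1}, "id2": {"bmi": 28.7}}

# Align rows on the identifiers the silos have in common.
shared_ids = sorted(set(silo_bank) & set(silo_hospital))

# Each silo builds its own local feature matrix in the same row order;
# the raw columns themselves never leave their silo of origin.
X_bank = [[silo_bank[i]["income"]] for i in shared_ids]
X_hospital = [[silo_hospital[i]["bmi"]] for i in shared_ids]
```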
References mentioned in this background section are not admitted to be prior art with respect to the present invention.
In a machine according to the present invention, a machine-learning system or deep neural network is split into a number of “worker” modules and a single “server” module. Worker modules are independent neural networks initialized locally on each data silo. A server network receives the last-layer output (referred to herein as “smashed data”) from each worker module during training, aggregates the results, and feeds them into its own local neural network. The server then calculates an error with respect to the prediction or classification task at hand and instructs the worker modules to update their model parameters using gradients to reduce the observed error. This process continues until the error has decreased to an acceptable level. A parameterized level of noise is applied to the worker gradients between each training iteration, resulting in a differentially private model. Each worker may parameterize the weighting of the amount of noise applied to its local neural network module in accordance with its independent privacy requirements. Thus the epsilon values (the measure of privacy loss for a differential change in the data) at each worker are independent. The invention in certain embodiments thus represents the introduction of differential privacy in a vertically partitioned data environment in which different silos with independent privacy requirements hold different sets of features/columns for the same dataset.
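The worker/server training loop described above can be sketched as follows. This is an illustrative NumPy toy, not the full system: the modules are single linear layers, all names and dimensions are invented, and Gaussian noise scaled by each worker's own epsilon stands in for the parameterized noise mechanism:

```python
import numpy as np

rng = np.random.default_rng(1)

class Worker:
    """One silo's local neural network module with its own privacy budget."""
    def __init__(self, n_features, cut_dim, epsilon):
        self.W = rng.normal(scale=0.1, size=(n_features, cut_dim))
        self.epsilon = epsilon  # independently chosen by each silo

    def forward(self, x):
        self.x = x
        return x @ self.W  # smashed data: the only thing sent to the server

    def backward(self, grad_smashed, lr=0.05):
        grad = self.x.T @ grad_smashed
        # Parameterized noise: a smaller epsilon adds more noise, giving
        # this worker a stronger (more private) local guarantee.
        grad += rng.normal(scale=1.0 / self.epsilon, size=grad.shape)
        self.W -= lr * grad

class Server:
    """Aggregates smashed data, computes the error, routes gradients back."""
    def __init__(self, cut_dims):
        self.splits = list(np.cumsum(cut_dims)[:-1])
        self.W = rng.normal(scale=0.1, size=(sum(cut_dims), 1))

    def step(self, smashed_parts, y, lr=0.05):
        h = np.concatenate(smashed_parts, axis=1)  # aggregate smashed data
        err = (h @ self.W - y) / len(y)            # observed prediction error
        grad_smashed = err @ self.W.T
        self.W -= lr * (h.T @ err)
        # Each worker receives only the gradient slice for its own module.
        return np.split(grad_smashed, self.splits, axis=1)

# Two silos hold different feature columns for the same 8 records.
workers = [Worker(3, 2, epsilon=1.0), Worker(2, 2, epsilon=5.0)]
server = Server(cut_dims=[2, 2])
x_parts = [rng.normal(size=(8, 3)), rng.normal(size=(8, 2))]
y = rng.normal(size=(8, 1))
for _ in range(10):
    smashed = [w.forward(x) for w, x in zip(workers, x_parts)]
    grads = server.step(smashed, y)
    for w, g in zip(workers, grads):
        w.backward(g)
```

Note that the two epsilon values are independent, so each silo tunes its own privacy/accuracy trade-off without coordination.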
One application of the invention in its various embodiments is to allow collaborating parties to train a single deep neural network with privacy guarantees. Due to the modular nature of the neural network topology, one may use trained “worker” neural network modules as privacy-preserving feature generators, which could be used as input to other machine learning methods. The invention thus allows for inter-organization and inter-line-of-business collaborative machine learning in regulated and constrained data environments where each silo holds varying sets of features.
These and other features, objects and advantages of the present invention will become better understood from a consideration of the following detailed description of the preferred embodiments and appended claims in conjunction with the drawings as described following:
Before the present invention is described in further detail, it should be understood that the invention is not limited to the particular embodiments described, and that the terms used in describing the particular embodiments are for the purpose of describing those particular embodiments only, and are not intended to be limiting, since the scope of the present invention will be limited only by the claims.
As shown in
A problem with building a model with a neural network is ensuring that the model has not memorized the underlying data, thereby compromising privacy. Known privacy attacks, such as membership inference attacks, may be performed by querying the model with specific inputs and observing the outputs, allowing an attacker to discern private data even without direct access to the data silo. In the network of
The process for applying machine learning using the system of
As shown in the swim lane diagram of
A potential problem in a system of this type is data leakage when applying noise only at the back-propagation phase as shown in
Traditional split neural networks are vulnerable to model inversion attacks, as noted above. To reduce leakage, the distance correlation between model inputs and the cut layer activations (i.e., the raw data and the smashed data) may be minimized. This means that the raw data and the smashed data are maximally dissimilar. This approach has been shown to have little if any effect on model accuracy.
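For reference, the sample distance correlation statistic mentioned above may be sketched in NumPy as follows; in training it would be added to the task loss with some weight so that minimizing the total loss drives the raw inputs and the smashed data to be statistically dissimilar. The function name and this arrangement are illustrative:

```python
import numpy as np

def distance_correlation(X, Y):
    """Sample distance correlation between two data matrices with the same
    number of rows: near 0 for independent-looking data, 1 for strongly
    dependent data. Minimizing dCor(raw inputs, smashed data) makes the
    cut-layer activations maximally dissimilar to the inputs."""
    def doubly_centered_distances(A):
        # Pairwise Euclidean distances, then double centering.
        D = np.sqrt(((A[:, None, :] - A[None, :, :]) ** 2).sum(axis=-1))
        return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()
    A = doubly_centered_distances(X)
    B = doubly_centered_distances(Y)
    dcov2 = (A * B).mean()            # squared sample distance covariance
    dvar_x = np.sqrt((A * A).mean())
    dvar_y = np.sqrt((B * B).mean())
    return np.sqrt(max(dcov2, 0.0)) / np.sqrt(dvar_x * dvar_y)
```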
The systems and methods described herein may in various embodiments be implemented by any combination of hardware and software. For example, in one embodiment, the systems and methods may be implemented by a computer system or a collection of computer systems, each of which includes one or more processors executing program instructions stored on a computer-readable storage medium coupled to the processors. The program instructions may implement the functionality described herein. The various systems and displays as illustrated in the figures and described herein represent example implementations. The order of any method may be changed, and various elements may be added, modified, or omitted.
A computing system or computing device as described herein may implement a hardware portion of a cloud computing system or non-cloud computing system, as forming parts of the various implementations of the present invention. The computer system may be any of various types of devices, including, but not limited to, a commodity server, personal computer system, desktop computer, laptop or notebook computer, mainframe computer system, handheld computer, workstation, network computer, a consumer device, application server, storage device, telephone, mobile telephone, or in general any type of computing node, compute node, compute device, and/or computing device. The computing system includes one or more processors (any of which may include multiple processing cores, which may be single or multi-threaded) coupled to a system memory via an input/output (I/O) interface. The computer system further may include a network interface coupled to the I/O interface.
In various embodiments, the computer system may be a single processor system including one processor, or a multiprocessor system including multiple processors. The processors may be any suitable processors capable of executing computing instructions. For example, in various embodiments, they may be general-purpose or embedded processors implementing any of a variety of instruction set architectures. In multiprocessor systems, each of the processors may commonly, but not necessarily, implement the same instruction set. The computer system also includes one or more network communication devices (e.g., a network interface) for communicating with other systems and/or components over a communications network, such as a local area network, wide area network, or the Internet. For example, a client application executing on the computing device may use a network interface to communicate with a server application executing on a single server or on a cluster of servers that implement one or more of the components of the systems described herein in a cloud computing or non-cloud computing environment as implemented in various sub-systems. In another example, an instance of a server application executing on a computer system may use a network interface to communicate with other instances of an application that may be implemented on other computer systems.
The computing device also includes one or more persistent storage devices and/or one or more I/O devices. In various embodiments, the persistent storage devices may correspond to disk drives, tape drives, solid state memory, other mass storage devices, or any other persistent storage devices. The computer system (or a distributed application or operating system operating thereon) may store instructions and/or data in persistent storage devices, as desired, and may retrieve the stored instructions and/or data as needed. For example, in some embodiments, the computer system may implement one or more nodes of a control plane or control system, and persistent storage may include the SSDs attached to that server node. Multiple computer systems may share the same persistent storage devices or may share a pool of persistent storage devices, with the devices in the pool representing the same or different storage technologies.
The computer system includes one or more system memories that may store code/instructions and data accessible by the processor(s). The system's memory capabilities may include multiple levels of memory and memory caches in a system designed to swap information in memories based on access speed, for example. The interleaving and swapping may extend to persistent storage in a virtual memory implementation. The technologies used to implement the memories may include, by way of example, static random-access memory (SRAM), dynamic RAM (DRAM), read-only memory (ROM), non-volatile memory, or flash-type memory. As with persistent storage, multiple computer systems may share the same system memories or may share a pool of system memories. System memory or memories may contain program instructions that are executable by the processor(s) to implement the routines described herein. In various embodiments, program instructions may be encoded in binary, assembly language, any interpreted language such as Java, compiled languages such as C/C++, or in any combination thereof; the particular languages given here are only examples. In some embodiments, program instructions may implement multiple separate clients, server nodes, and/or other components.
In some implementations, program instructions may include instructions executable to implement an operating system (not shown), which may be any of various operating systems, such as UNIX, LINUX, Solaris™, MacOS™, or Microsoft Windows™. Any or all of program instructions may be provided as a computer program product, or software, that may include a non-transitory computer-readable storage medium having stored thereon instructions, which may be used to program a computer system (or other electronic devices) to perform a process according to various implementations. A non-transitory computer-readable storage medium may include any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). Generally speaking, a non-transitory computer-accessible medium may include computer-readable storage media or memory media such as magnetic or optical media, e.g., disk or DVD/CD-ROM coupled to the computer system via the I/O interface. A non-transitory computer-readable storage medium may also include any volatile or non-volatile media such as RAM or ROM that may be included in some embodiments of the computer system as system memory or another type of memory. In other implementations, program instructions may be communicated using optical, acoustical or other form of propagated signal (e.g., carrier waves, infrared signals, digital signals, etc.) conveyed via a communication medium such as a network and/or a wired or wireless link, such as may be implemented via a network interface. A network interface may be used to interface with other devices, which may include other computer systems or any type of external electronic device. 
In general, system memory, persistent storage, and/or remote storage accessible on other devices through a network may store data blocks, replicas of data blocks, metadata associated with data blocks and/or their state, database configuration information, and/or any other information usable in implementing the routines described herein.
In certain implementations, the I/O interface may coordinate I/O traffic between processors, system memory, and any peripheral devices in the system, including through a network interface or other peripheral interfaces. In some embodiments, the I/O interface may perform any necessary protocol, timing or other data transformations to convert data signals from one component (e.g., system memory) into a format suitable for use by another component (e.g., processors). In some embodiments, the I/O interface may include support for devices attached through various types of peripheral buses, such as a variant of the Peripheral Component Interconnect (PCI) bus standard or the Universal Serial Bus (USB) standard, for example. Also, in some embodiments, some or all of the functionality of the I/O interface, such as an interface to system memory, may be incorporated directly into the processor(s).
A network interface may allow data to be exchanged between a computer system and other devices attached to a network, such as other computer systems (which may implement one or more storage system server nodes, primary nodes, read-only nodes, and/or clients of the database systems described herein), for example. In addition, the I/O interface may allow communication between the computer system and various I/O devices and/or remote storage. Input/output devices may, in some embodiments, include one or more display terminals, keyboards, keypads, touchpads, scanning devices, voice or optical recognition devices, or any other devices suitable for entering or retrieving data by one or more computer systems. These may connect directly to a particular computer system or generally connect to multiple computer systems in a cloud computing environment, grid computing environment, or other system involving multiple computer systems. Multiple input/output devices may be present in communication with the computer system or may be distributed on various nodes of a distributed system that includes the computer system. The user interfaces described herein may be visible to a user using various types of display screens, which may include CRT displays, LCD displays, LED displays, and other display technologies. In some implementations, the inputs may be received through the displays using touchscreen technologies, and in other implementations the inputs may be received through a keyboard, mouse, touchpad, or other input technologies, or any combination of these technologies.
In some embodiments, similar input/output devices may be separate from the computer system and may interact with one or more nodes of a distributed system that includes the computer system through a wired or wireless connection, such as over a network interface. The network interface may commonly support one or more wireless networking protocols (e.g., Wi-Fi/IEEE 802.11, or another wireless networking standard). The network interface may support communication via any suitable wired or wireless general data networks, such as other types of Ethernet networks, for example. Additionally, the network interface may support communication via telecommunications/telephony networks such as analog voice networks or digital fiber communications networks, via storage area networks such as Fibre Channel SANs, or via any other suitable type of network and/or protocol.
Any of the distributed system embodiments described herein, or any of their components, may be implemented as one or more network-based services in the cloud computing environment. For example, a read-write node and/or read-only nodes within the database tier of a database system may present database services and/or other types of data storage services that employ the distributed storage systems described herein to clients as network-based services. In some embodiments, a network-based service may be implemented by a software and/or hardware system designed to support interoperable machine-to-machine interaction over a network. A web service may have an interface described in a machine-processable format, such as the Web Services Description Language (WSDL). Other systems may interact with the network-based service in a manner prescribed by the description of the network-based service's interface. For example, the network-based service may define various operations that other systems may invoke, and may define a particular application programming interface (API) to which other systems may be expected to conform when requesting the various operations.
In various embodiments, a network-based service may be requested or invoked through the use of a message that includes parameters and/or data associated with the network-based services request. Such a message may be formatted according to a particular markup language such as Extensible Markup Language (XML), and/or may be encapsulated using a protocol such as Simple Object Access Protocol (SOAP). To perform a network-based services request, a network-based services client may assemble a message including the request and convey the message to an addressable endpoint (e.g., a Uniform Resource Locator (URL)) corresponding to the web service, using an Internet-based application layer transfer protocol such as Hypertext Transfer Protocol (HTTP). In some embodiments, network-based services may be implemented using Representational State Transfer (REST) techniques rather than message-based techniques. For example, a network-based service implemented according to a REST technique may be invoked through parameters included within an HTTP method such as PUT, GET, or DELETE.
Unless otherwise stated, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Although any methods and materials similar or equivalent to those described herein can also be used in the practice or testing of the present invention, a limited number of the exemplary methods and materials are described herein. It will be apparent to those skilled in the art that many more modifications are possible without departing from the inventive concepts herein.
All terms used herein should be interpreted in the broadest possible manner consistent with the context. When a grouping is used herein, all individual members of the group and all combinations and subcombinations possible of the group are intended to be individually included. When a range is stated herein, the range is intended to include all subranges and individual points within the range. All references cited herein are hereby incorporated by reference to the extent that there is no inconsistency with the disclosure of this specification.
The present invention has been described with reference to certain preferred and alternative embodiments that are intended to be exemplary only and not limiting to the full scope of the present invention.
This application claims the benefit of U.S. provisional patent application No. 63/275,011, filed on Nov. 3, 2021. Such application is incorporated herein by reference in its entirety.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2022/048661 | 11/2/2022 | WO |

Number | Date | Country
---|---|---
63275011 | Nov 2021 | US