COMPUTER-IMPLEMENTED METHOD OF APPLYING A FIRST FUNCTION TO EACH DATA ELEMENT IN A DATA SET, AND A WORKER NODE FOR IMPLEMENTING THE SAME

Information

  • Patent Application
  • Publication Number
    20210004494
  • Date Filed
    December 13, 2018
  • Date Published
    January 07, 2021
Abstract
There is provided a computer-implemented method of applying a first function to each data element in a first data set, the method comprising (i) determining whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; (ii) forming a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; (iii) applying the first function to each data element in the compressed data set; and (iv) forming an output based on the results of step (iii); wherein steps (i)-(iv) are performed using multiparty computation, MPC, techniques. A corresponding system and worker node are also provided.
Description
FIELD OF THE INVENTION

The disclosure relates to the application of a first function to each data element in a data set, and in particular to a computer-implemented method and a worker node for applying a first function to each data element in a data set.


BACKGROUND OF THE INVENTION

In settings where sensitive information from multiple mutually distrusting parties needs to be processed, cryptography-based privacy-preserving techniques such as multiparty computation (MPC) can be used. In particular, when using MPC, sensitive data is “secret shared” between multiple parties so that no individual party can learn the data without the help of other parties. Using cryptographic protocols between these parties, it is possible to perform computations on such “secret shared” data. Although a wide range of primitive operations on secret shared data are available, not all traditional programming language constructs are available. For instance, it is not possible to have an “if” statement with a condition involving a sensitive variable, simply because no party in the system should know whether the condition holds. Hence, efficient methods to perform higher-level operations (e.g., sorting a list or finding its maximum) are needed that make use only of operations available on secret-shared data.
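By way of illustration, additive secret sharing, one common basis for MPC, can be sketched in a few lines of Python. The modulus and the three-party setting below are assumptions made purely for this example and are not tied to any particular MPC framework:

import secrets

Q = 2**64  # illustrative ring modulus (assumption for this sketch)

def share(x, parties=3):
    # split x into random shares that sum to x modulo Q; any strict
    # subset of the shares reveals nothing about x
    shares = [secrets.randbelow(Q) for _ in range(parties - 1)]
    shares.append((x - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

def add_shared(xs, ys):
    # addition of two secret-shared values needs no interaction:
    # each party simply adds its own two shares locally
    return [(a + b) % Q for a, b in zip(xs, ys)]

assert reconstruct(add_shared(share(12), share(30))) == 42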


One common operation occurring in information processing is the “map” operation, where the same function f is applied to all elements in a data set.


SUMMARY OF THE INVENTION

One way to perform the “map” operation on secret-shared data is to apply a function f under MPC to the secret shares of each data element in the data set. However, suppose a function f is to be mapped to a data set for which:

    • it is computationally expensive to compute function ƒ on input x using MPC;
    • there is a criterion ϕ that is straightforward to check on input x such that, if it is true, ƒ(x)=g(x), where function g is straightforward to compute (e.g., it is a constant); and
    • it is known that ϕ holds for a large part of the data set.


If privacy of the data is not an issue, then the time taken for the “map” operation could be reduced by applying g instead of ƒ on data elements for which ϕ holds. Translated to the MPC setting, this would mean that, for each data element x of the data set, it is checked if ϕ holds using MPC; and if ϕ holds then g is executed on x using MPC; and otherwise ƒ is executed on x using MPC. However, this would leak information about x since, to be able to branch on ϕ(x), it would be necessary to reveal whether or not ϕ(x) is true.
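For context, the standard leak-free alternative is to evaluate both branches on every data element and select between them arithmetically, so that the control flow never depends on ϕ(x). The following Python sketch (with hypothetical f, g and ϕ chosen purely for illustration) shows this pattern in the clear; note that it still computes the expensive function f on all n elements, which is precisely the cost the techniques described below avoid:

def oblivious_map(xs, f, g, phi):
    # evaluate both f and g on every element and combine arithmetically;
    # nothing is revealed about phi(x), but f is still computed n times
    return [phi(x) * g(x) + (1 - phi(x)) * f(x) for x in xs]

# hypothetical example: f is the "expensive" function, g is the constant 0,
# and phi(x) = 1 exactly when x == 0 (so that f(x) == g(x) when phi holds)
f = lambda x: x * x * x
g = lambda x: 0
phi = lambda x: 1 if x == 0 else 0
assert oblivious_map([0, 2, 0, 3], f, g, phi) == [0, 8, 0, 27]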


There is therefore a need for an improved technique for applying a first function to each data element in a data set that addresses one or more of the above issues.


The techniques described herein provide that a function ƒ can be mapped on to a data set in the above setting, which avoids having to apply ƒ to all data elements in the data set and does not leak the value of criterion ϕ. Embodiments provide that a function ƒ can be mapped on to a data set such that ƒ needs to be executed on a data element in the data set under MPC at most N times, where N is a known upper bound on the number of data elements not satisfying ϕ. To obtain this improvement, the techniques described herein provide that g is executed on all data elements of the data set, and a “compression” operation is performed, with an output formed from the result of the compression. Although these steps introduce additional computation effort, if ƒ is complicated enough then the savings of avoiding computation of ƒ on some of the data elements in the data set outweigh these additional costs, leading to an overall performance improvement.


According to a first specific aspect, there is provided a computer-implemented method of applying a first function to each data element in a first data set, the method comprising (i) determining whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; (ii) forming a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; (iii) applying the first function to each data element in the compressed data set; and (iv) forming an output based on the results of step (iii); wherein steps (i)-(iv) are performed using multiparty computation, MPC, techniques.


According to a second aspect, there is provided a worker node for use in the method according to the first aspect.


According to a third aspect, there is provided a system for applying a first function to each data element in a first data set, the system comprising a plurality of worker nodes, wherein the plurality of worker nodes are configured to use multiparty computation, MPC, techniques to determine whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; form a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; apply the first function to each data element in the compressed data set; and form an output based on the results of applying the first function to each data element in the compressed data set.


According to a fourth aspect, there is provided a worker node configured for use in the system according to the third aspect.


According to a fifth aspect, there is provided a worker node for use in applying a first function to each data element in a first data set, wherein the worker node is configured to use one or more multiparty computation, MPC, techniques with at least one other worker node to determine whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; form a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; apply the first function to each data element in the compressed data set; and form an output based on the result of applying the first function to each data element in the compressed data set.


According to a sixth aspect, there is provided a computer-implemented method of operating a worker node to apply a first function to each data element in a first data set, the method comprising (i) determining whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; (ii) forming a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; (iii) applying the first function to each data element in the compressed data set; and (iv) forming an output based on the results of step (iii); wherein steps (i)-(iv) are performed using multiparty computation, MPC, techniques with one or more other worker nodes.


According to a seventh aspect, there is provided a computer program product comprising a computer readable medium having computer readable code embodied therein, the computer readable code being configured such that, on execution by a suitable computer or processor, the computer or processor is caused to perform the method according to the first aspect or the sixth aspect.


These and other aspects will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.





BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary embodiments will now be described, by way of example only, with reference to the following drawings, in which:



FIG. 1 is a block diagram of a system comprising a plurality of worker nodes according to an embodiment of the techniques described herein;



FIG. 2 is a block diagram of a worker node that can be used in embodiments of the techniques described herein;



FIG. 3 is a diagram illustrating a so-called filtered map procedure according to an embodiment of the techniques described herein;



FIG. 4 is a diagram illustrating a so-called filtered map-reduce procedure according to an embodiment of the techniques described herein;



FIG. 5 is a flow chart illustrating a method of applying a first function to each data element in a data set; and



FIG. 6 is a graph illustrating the performance of a normal map procedure versus the filtered map procedure for different values of an upper bound according to the techniques described herein.





DETAILED DESCRIPTION OF EMBODIMENTS


FIG. 1 is a block diagram of a system 1 in which the techniques and principles described herein may be implemented. The system 1 comprises a plurality of worker nodes 2, with three worker nodes 2 being shown in FIG. 1. Each worker node 2 is able to participate in multiparty computations (MPCs) with one or more of the other worker nodes 2. Multiparty computation techniques allow the computation of a joint function on sensitive (private) inputs from mutually distrusting parties without requiring those parties to disclose these inputs to a trusted third party or to each other (thus preserving the privacy of these inputs). Cryptographic protocols ensure that no participating party (or coalition of parties) learns anything from this computation except its intended part of the computation outcome. In the system shown in FIG. 1, an input for the computation can be provided by one or more worker nodes 2 and/or by one or more input nodes (not shown in FIG. 1). The output of the computation may be returned to the node that provided the input(s), e.g. one or more worker nodes 2 and/or one or more input nodes, and/or the output can be provided to one or more nodes that did not provide an input, e.g. one or more of the other worker nodes 2 and/or one or more output nodes (not shown in FIG. 1). Often, a recipient of the output of the MPC is a node that requested the computation.


The plurality of worker nodes 2 in FIG. 1 can be considered as a “committee” of worker nodes 2 that can perform an MPC. A single committee may perform the whole MPC, but in some cases multiple committees (comprising a respective plurality of worker nodes 2) can perform respective parts of the MPC.


The worker nodes 2 are interconnected and thus can exchange signalling therebetween (shown as signals 3). The worker nodes 2 may be local to each other, or one or more of the worker nodes 2 may be remote from the other worker nodes 2. In the latter case, the worker nodes 2 may be interconnected via one or more wireless or wired networks, including the Internet and a local area network.


Each worker node 2 can be any type of electronic device or computing device. For example, a worker node 2 can be, or be part of, any suitable type of electronic device or computing device, such as a server, computer, laptop, smart phone, etc. It will be appreciated that the worker nodes 2 shown in FIG. 1 do not need to be the same type of device; for example, one or more worker nodes 2 can be servers, one or more worker nodes 2 can be desktop computers, etc.



FIG. 2 is a block diagram of an exemplary worker node 2. The worker node 2 includes interface circuitry 4 for enabling a data connection to other devices or nodes, such as other worker nodes 2. In particular, the interface circuitry 4 can enable a connection between the worker node 2 and a network, such as the Internet or a local area network, via any desirable wired or wireless communication protocol. The worker node 2 further includes a processing unit 6 for performing operations on data and for generally controlling the operation of the worker node 2. The worker node 2 further includes a memory unit 8 for storing any data required for the execution of the techniques described herein and for storing computer program code for causing the processing unit 6 to perform method steps as described in more detail below.


The processing unit 6 can be implemented in numerous ways, with software and/or hardware, to perform the various functions described herein. The processing unit 6 may comprise one or more microprocessors or digital signal processors (DSPs) that may be programmed using software or computer program code to perform the required functions and/or to control components of the processing unit 6 to effect the required functions. The processing unit 6 may be implemented as a combination of dedicated hardware to perform some functions (e.g. amplifiers, pre-amplifiers, analog-to-digital convertors (ADCs) and/or digital-to-analog convertors (DACs)) and a processor (e.g., one or more programmed microprocessors, controllers, DSPs and associated circuitry) to perform other functions. Examples of components that may be employed in various embodiments of the present disclosure include, but are not limited to, conventional microprocessors, DSPs, application specific integrated circuits (ASICs), and field-programmable gate arrays (FPGAs).


The memory unit 8 can comprise any type of non-transitory machine-readable medium, such as cache or system memory, including volatile and non-volatile computer memory such as random access memory (RAM), static RAM (SRAM), dynamic RAM (DRAM), read-only memory (ROM), programmable ROM (PROM), erasable PROM (EPROM), and electrically erasable PROM (EEPROM).


If a worker node 2 stores or holds one or more data sets that can be processed in a multiparty computation, the data set(s) can be stored in the memory unit 8.


As noted above, one common operation occurring in information processing is the map operation, where the same function ƒ is applied to all data elements in a data set. However, applying function ƒ can be computationally expensive, particularly where the data set is secret/private and the function ƒ has to be applied under MPC to each individual data element.


For some functions ƒ, there can be a criterion ϕ that is straightforward to check on input (data element) x such that, if it is true, ƒ(x)=g(x) where function g is straightforward to compute (e.g., it is a constant), which means that the time taken for the map operation could be reduced by applying g instead of ƒ on data elements for which ϕ holds. This can mean that, for each data element x of the data set, it is checked if ϕ holds using MPC; and if ϕ holds then g is executed on x using MPC; and otherwise ƒ is executed on x using MPC. However, this would leak information about x since, to be able to branch on ϕ(x), it would be necessary to reveal whether or not ϕ(x) is true.


Thus, to respect the sensitivity of the data elements in the data set, techniques are required whose program flow does not depend on the sensitive data. The techniques described herein provide improvements to the application of a function ƒ to a data set that is secret or private to one or more parties, where there is a criterion ϕ for function ƒ as described above, meaning that function ƒ does not need to be applied to all data elements in the data set.


A first embodiment of the techniques presented herein is described with reference to FIG. 3, which illustrates a map operation on a data set 20 that comprises a plurality of data elements 22. This first embodiment is also referred to herein as a ‘filtered map’ operation. It will be appreciated that although FIG. 3 shows the data set 20 as having five data elements 22, the data set 20 may comprise fewer data elements 22, or typically many more than five data elements 22. The data elements 22 are numbered consecutively in FIG. 3 from #1 to #5 for ease of identification.


Firstly, for all data elements 22 in the data set 20, it is checked whether ϕ is satisfied. This check is performed using MPC techniques. That is, the check is performed by two or more worker nodes 2 using MPC techniques so that no individual worker node 2 learns the content of a data element 22 or learns whether a particular data element 22 satisfies ϕ. As noted above, ϕ is satisfied only if ƒ(x)=g(x), i.e. ϕ is satisfied only if the result of applying function ƒ to data element x is the same as the result of applying function g to data element x. An example of a check of a criterion ϕ is described below with reference to Algorithm 5.


In FIG. 3, the data elements 22 that are found to satisfy ϕ are shown in clear boxes and the data elements 22 that are found not to satisfy ϕ are shown in cross-hatched boxes. It will be appreciated that FIG. 3 shows this distinction between the data elements 22 for ease of understanding only, and no individual worker node 2 knows which data elements 22 satisfy/do not satisfy ϕ. In the example of FIG. 3, data elements #2 and #5 are found not to satisfy the criterion ϕ. Data elements #1, #3 and #4 are found to satisfy the criterion ϕ.


Given an upper bound N on the number of data elements 22 that do not satisfy ϕ, the data set 20 is compressed into a compressed data set 24 having N data elements 22 by compression operation 26. The compression operation 26 takes the data elements 22 in data set 20 that do not satisfy the criterion ϕ into a compressed data set 24, along with one or more data elements corresponding to default values 28 if the upper bound is not met (i.e. if the number of data elements 22 that do not satisfy ϕ is less than N), to make an N-sized compressed data set 24. A technique for performing this compression is set out in more detail below. The default values 28 can be random data elements that are in the domain of ƒ. Alternatively, the default values 28 can be data elements 22 in the data set 20 that do satisfy the criterion ϕ. The compression operation 26 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the data elements 22, which data elements 22 of data set 20 become part of the compressed data set 24, or which data elements in the compressed data set 24 correspond to the default value(s) 28. The worker nodes 2 that perform the compression operation 26 may be the same or different to the worker nodes 2 that perform the check of the criterion ϕ.


In the example of FIG. 3, the compression operation 26 takes data elements #2 and #5 into the compressed data set 24.



FIG. 3 also shows a copy operation 30 that copies the data set 20 to form a copied data set 32 that is identical to the data set 20. It will be appreciated that in practice this copy operation 30 may not need to be performed if the compression operation 26 copies the data elements 22 in data set 20 that do not satisfy the criterion ϕ into the compressed data set 24.


Function ƒ is applied to all elements (i.e. the data elements that do not satisfy ϕ and the one or more default values 28) of the compressed data set 24. This is shown by the map operation 34 and results in a compressed ƒ-mapped data set 36 having ƒ-mapped data elements 37. The application of the function ƒ to the elements in compressed data set 24 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the data elements 22 in the compressed data set 24, or the result of applying the function ƒ to any data element 22 (including the default value(s) 28). The worker nodes 2 that perform the ƒ-mapping operation 34 may be the same or different to the worker nodes 2 that perform the check and/or compression operation 26.


In the example of FIG. 3, the ƒ-mapping operation 34 applies function ƒ to data elements #2 and #5.


Function g is applied to all elements 22 of the copied data set 32 (or original data set 20). This is shown by the map operation 38 and results in g-mapped data set 40 having g-mapped data elements 41. The application of the function g to the elements in copied data set 32 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the data elements 22 in the data set 20/copied data set 32, or the result of applying the function g to any data element 22. The worker nodes 2 that perform the g-mapping operation 38 may be the same or different to the worker nodes 2 that perform the check, the compression operation 26 and/or the ƒ-mapping operation 34.


In the example of FIG. 3, the g-mapping operation 38 applies function g to all data elements #1 to #5.


After the mapping operations 34, 38, the compressed ƒ-mapped data set 36 is decompressed by decompression operation 42 into a ƒ-mapped data set 44 having the same size (i.e. same number of data elements) as data set 20. In particular, the ƒ-mapped data elements 37 corresponding to the data elements 22 for which the criterion ϕ was not satisfied are placed into the ƒ-mapped data set 44 in the locations corresponding to the locations of the respective data elements in the data set 20, with the relevant g-mapped data elements 41 included in the ƒ-mapped data set 44 for any data element 22 in the data set 20 for which the criterion ϕ was satisfied. Thus, the ƒ-mapped data set 44 includes the ƒ-mapped data elements 37 and some of the g-mapped data elements 41. A technique for performing this decompression is set out in more detail below. In the embodiments above where the default value(s) 28 are some of the data elements 22 in the data set 20 that do satisfy the criterion ϕ, the decompression operation 42 can take, for those data elements 22 that were used as default values 28, either the ƒ-mapped versions of those data elements 22 from the compressed ƒ-mapped data set 36 or the g-mapped versions of those data elements 22 from the g-mapped data set 40 into the ƒ-mapped data set 44 (and it will be appreciated that it does not matter which of the sets 36, 40 provides these elements, as they are the same).


The decompression operation 42 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the ƒ-mapped data elements 37, the g-mapped data elements 41, which ƒ-mapped data elements 37 decompress to which locations in the ƒ-mapped data set 44, which g-mapped data elements 41 decompress to which locations in the ƒ-mapped data set 44, or the content of the ƒ-mapped data set 44. The worker nodes 2 that perform the decompression operation 42 may be the same or different to the worker nodes 2 that perform the check, the compression operation 26, the ƒ-mapping operation 34 and/or the g-mapping operation 38.


In the example of FIG. 3, the ƒ-mapped data elements #2 and #5 are decompressed to locations in ƒ-mapped data set 44 corresponding to the locations of data elements #2 and #5 in data set 20, and g-mapped data elements #1, #3 and #4 are decompressed to locations in ƒ-mapped data set 44 corresponding to the locations of data elements #1, #3 and #4 in data set 20.


It will be noted that in the ƒ-mapped data set 44, each mapped data element was obtained either by directly computing ƒ of that data element 22 in data set 20, or by computing g of that data element 22 if ϕ was satisfied. Thus, based on the definition of criterion ϕ, the end result of the technique shown in FIG. 3 is the application of function ƒ to all data elements 22 in the original data set 20.


A second embodiment of the techniques presented herein relates to a so-called map-reduce operation on a data set. In a map-reduce operation/computation, the task is to compute ƒ(x1) ⊕ . . . ⊕ƒ(xn) where ⊕ is an associative operator (e.g. addition) and ƒ(xi) is equal to a neutral element of the associative operator (e.g. zero in the case of addition) whenever the criterion ϕ is satisfied. In this second embodiment, by comparison to the first embodiment above, a decompression operation is not necessary, and a ‘reduce’ operation can be performed directly on the compressed ƒ-mapped data set to produce the output.
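As a plain (non-MPC) illustration of this structure, the following Python sketch uses addition as the associative operator ⊕, with a hypothetical f that returns the neutral element 0 whenever the criterion holds:

from functools import reduce

def map_reduce(xs, f, op, neutral):
    return reduce(op, (f(x) for x in xs), neutral)

# phi(x) := "x is even"; f returns 0 (neutral for +) when phi holds
f = lambda x: x * x if x % 2 else 0
assert map_reduce([1, 2, 3, 4, 5], f, lambda a, b: a + b, 0) == 1 + 9 + 25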


The second embodiment is described below with reference to FIG. 4, which shows a data set 50 that comprises a plurality of data elements 52. It will be appreciated that although FIG. 4 shows the data set 50 as having five data elements 52, the data set 50 may comprise fewer data elements 52, or typically many more than five data elements 52. The data elements 52 are numbered consecutively in FIG. 4 from #1 to #5 for ease of identification.


Firstly, for all data elements 52 in the data set 50, it is checked whether ϕ is satisfied. This check is performed using MPC techniques. That is, the check is performed by two or more worker nodes 2 using MPC techniques so that no individual worker node 2 learns the content of a data element 52 or learns whether a particular data element 52 satisfies ϕ. As noted above, ϕ is satisfied only if ƒ(x) = g(x) = the neutral element of ⊕, i.e. ϕ is satisfied only if the result of applying function ƒ to data element x is the neutral element of ⊕ (i.e. applying function ƒ to data element x produces a result that does not contribute to the output of the overall map-reduce operation).


In FIG. 4, the data elements 52 that are found to satisfy ϕ are shown in clear boxes and the data elements 52 that are found not to satisfy ϕ are shown in cross-hatched boxes. It will be appreciated that FIG. 4 shows this distinction between the data elements 52 for ease of understanding only, and no individual worker node 2 knows which data elements 52 satisfy/do not satisfy ϕ. In the example of FIG. 4, data elements #2 and #5 are found not to satisfy the criterion ϕ. Data elements #1, #3 and #4 are found to satisfy the criterion ϕ.


Given an upper bound N on the number of data elements 52 that do not satisfy ϕ, the data set 50 is compressed into a compressed data set 54 having N data elements 52 by compression operation 56. The compression operation 56 takes the data elements 52 in data set 50 that do not satisfy the criterion ϕ into a compressed data set 54, along with one or more data elements corresponding to default values 58 if the upper bound is not met (i.e. if the number of data elements 52 that do not satisfy ϕ is less than N), to make an N-sized compressed data set 54. In this embodiment, the default value(s) 58 are such that the result of applying function ƒ to the default value(s) is a neutral element of the associative operator ⊕. As in the first embodiment, the default value(s) can be random data elements that are in the domain of ƒ, or they can be data elements 52 in the data set 50 that do satisfy the criterion ϕ. A technique for performing this compression operation 56 is set out in more detail below. The compression operation 56 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the data elements 52, which data elements 52 of data set 50 become part of the compressed data set 54, or which data elements in the compressed data set 54 correspond to the default value(s) 58. The worker nodes 2 that perform the compression operation 56 may be the same or different to the worker nodes 2 that perform the check of the criterion ϕ.


In the example of FIG. 4, the compression operation 56 takes data elements #2 and #5 into the compressed data set 54.


Function ƒ is applied to all elements (i.e. the data elements that do not satisfy ϕ and the one or more default values 58) of the compressed data set 54. This is shown by the map operation 60 and results in a compressed ƒ-mapped data set 62 having ƒ-mapped data elements. The application of the function ƒ to the elements in compressed data set 54 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the data elements 52 in the compressed data set 54, or the result of applying the function ƒ to any data element 52 (including the default value(s) 58). The worker nodes 2 that perform the ƒ-mapping operation 60 may be the same or different to the worker nodes 2 that perform the check and/or compression operation 56.


In the example of FIG. 4, the ƒ-mapping operation 60 applies function ƒ to data elements #2 and #5.


After the mapping operation 60, the compressed ƒ-mapped data set 62 is reduced by reduce operation 64 using operator ⊕. That is, the ƒ-mapped data elements in ƒ-mapped data set 62 (i.e. corresponding to the data elements 52 for which the criterion ϕ was not satisfied and the ƒ-mapped data elements derived from one or more default values 58) are combined using the associative operator ⊕ to produce an output 66.


The reduce operation 64 is performed using MPC techniques by two or more worker nodes 2 so that no individual worker node 2 learns the values of the ƒ-mapped data elements, or the output 66. The worker nodes 2 that perform the reduce operation 64 may be the same or different to the worker nodes 2 that perform the check, the compression operation 56 and/or the ƒ-mapping operation 60.


In the example of FIG. 4, the ƒ-mapped data elements #2 and #5 are combined using operator ⊕ to form the output 66 (noting that the ƒ-mapped default data elements are neutral elements of the operator ⊕).


It will be noted that the output 66 is formed from the data elements 52 for which the application of function ƒ to the data element 52 provides a non-neutral element for the operator ⊕ (by the definition of criterion ϕ).


More detailed implementations of the first and second embodiments are described below with reference to a particular MPC framework. Thus, the techniques described herein provide for carrying out a “map” operation on a secret-shared data set. The data elements in the data set are vectors so that the full data set is a matrix with the elements as rows, secret-shared between a number of worker nodes 2 (so either an input node has secret-shared the data set with the worker nodes 2 beforehand, or the data set is the result of a previous multiparty computation). In the first embodiment, the result of the map operation is another secret-shared data set, given as a matrix that contains the result of applying the “map” operation on the data set; and in the second embodiment, the result is a secret-shared vector that contains the result of applying a “map-reduce” operation on the data set.


The techniques described herein can be based on any standard technique for performing multiparty computations between multiple worker nodes 2. To implement the techniques, it is necessary to be able to compute on numbers in a given ring with the primitive operations of addition and multiplication. In the following description, as is standard in the art, multiparty computation algorithms are described as normal algorithms, except that secret-shared values are between brackets, e.g., [x], and operations like [x]·[y] induce a cryptographic protocol between the worker nodes 2 implementing the given operation. Examples of such frameworks are passively secure MPC based on Shamir secret sharing or the SPDZ family of protocols, which are known to those skilled in the art.


Four higher-level operations are also useful for implementing the techniques described herein, as they allow array elements to be accessed at sensitive indices. These operations are:

    • [i]←Ix(ix) returns a “secret index” [i]: a representation of array index ix as one or more secret-shared values;
    • r←IxGet([M]; [i]) returns the row in the secret-shared matrix [M] pointed to by secret index [i];
    • [M″]←IxSet([M]; [M′]; [i]) returns secret shares of matrix [M] with the row pointed to by secret index [i] replaced by the respective row from [M′];
    • [i′]←IxCondUpdate([i], [δ]) returns a secret index pointing to the same index if [δ]=0, and to the next index if [δ]=1.


Multiple ways of implementing these operations based on an existing MPC framework are known in the art and further details are not provided herein. A straightforward adaptation to matrices of the vector indexing techniques from “Design of large scale applications of secure multiparty computation: secure linear programming” by S. De Hoogh, PhD thesis, Eindhoven University of Technology, 2012 has been used, and is set out below in Algorithm 1. An alternative technique is based on adapting the secret vector indexing techniques from “Universally Verifiable Outsourcing and Application to Linear Programming” by S. de Hoogh, B. Schoenmakers, and M. Veeningen, volume 13 of Cryptology and Information Security Series, chapter 10. IOS Press, 2015.












Algorithm 1 Secret indexing of indices 1, . . . , n based on arrays

1: function Ix(ix)  ▷ return secret index representation of ix
2:   return [0], . . . , [0], [1], [0], . . . , [0]  ▷ one at the ixth location
3: function IxGet([M]; [Δ])  ▷ return the row of the n×k matrix M indicated by Δ
4:   return (Σu=1,...,n [Δu] · [Mu,v])v=1,...,k
5: function IxSet([M]; [M′]; [Δ])  ▷ return M with its Δth row replaced by that of M′
6:   return ([Mu,v] + [Δu] · ([M′u,v] − [Mu,v]))u=1,...,n; v=1,...,k
7: function IxCondUpdate([Δ]; [δ])  ▷ return secret index repr. of Δ + δ, δ ∈ {0, 1}
8:   return ([Δi] + [δ] · ([Δi−1] − [Δi]))i=1,...,n
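For illustration only, Algorithm 1 can be modelled in the clear (on plain integers, with no secret sharing) by the following Python sketch; the 0-based indexing is an assumption of the sketch, which shows the data flow rather than the cryptographic protocol:

def Ix(ix, n):
    # secret index representation: a unit vector with a one at position ix
    return [1 if u == ix else 0 for u in range(n)]

def IxGet(M, delta):
    # an inner product with the index vector selects one row of M
    return [sum(delta[u] * M[u][v] for u in range(len(M)))
            for v in range(len(M[0]))]

def IxSet(M, Mp, delta):
    # replace the row of M pointed to by delta with the respective row of Mp
    return [[M[u][v] + delta[u] * (Mp[u][v] - M[u][v])
             for v in range(len(M[0]))] for u in range(len(M))]

def IxCondUpdate(delta, d):
    # shift the one to the next position iff d == 1
    return [delta[i] + d * ((delta[i - 1] if i > 0 else 0) - delta[i])
            for i in range(len(delta))]

M = [[1, 2], [3, 4], [5, 6]]
j = Ix(1, 3)
assert IxGet(M, j) == [3, 4]
assert IxCondUpdate(j, 1) == Ix(2, 3)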









Filtered map procedure—The following section relates to the filtered map procedure as shown in Algorithm 2 below, and provides a specific implementation of the first embodiment above. Algorithm 2 takes as arguments the function ƒ of the mapping, the simplified function g and predicate/criterion ϕ specifying when simplified function g can be used, an upper bound N on the number of data elements for which ϕ does not hold, and a vector z containing some default value on which ƒ can be applied (but whose results are not used).












Algorithm 2 Filtered map with complex function f, simple function g, predicate ϕ, upper bound N, default value z

1: function FilteredMap(f, g, ϕ, N, z; [M])
2:   ▷ compute vector [v] containing ones where predicate ϕ is not satisfied
3:   for i = 1, . . . , |[M]| do [vi] ← 1 − ϕ([Mi])
4:   ▷ compress dataset [M] to the items [M′] not satisfying ϕ
5:   for i = 1, . . . , N do [M′i] ← z
6:   [j] ← Ix(0)
7:   for i = 1, . . . , |[M]| do
8:     [ΔM′] ← ([M′u,v] + [vi] · ([Mi,v] − [M′u,v]))u=1,...,N; v=1,...,k
9:     [M′] ← IxSet([M′]; [ΔM′]; [j])
10:    [j] ← IxCondUpdate([j], [vi])
11:  ▷ apply f to the compressed dataset, g to the full dataset
12:  for i = 1, . . . , N do [N′i] ← f([M′i])
13:  for i = 1, . . . , |[M]| do [Ni] ← g([Mi])
14:  ▷ decompress the results from [N′] back into [N]
15:  [j] ← Ix(0)
16:  for i = 1, . . . , |[M]| do
17:    [c] ← IxGet([N′]; [j])
18:    [Ni] ← ([Ni,j] + [vi] · ([cj] − [Ni,j]))j=1,...,k
19:    [j] ← IxCondUpdate([j], [vi])
20:  return [N]









First, a vector [v] is computed that contains a one for each row of [M] where ϕ is not satisfied, and a zero where ϕ is satisfied (line 3 of Algorithm 2).


Next, given matrix [M] and vector [v], the algorithm builds a matrix [M′] with all 1-marked rows of [M] as follows. First, each row of [M′] is initialised to the default value z (line 5 of Algorithm 2). Next, [M′] is filled in by going through [M] row-by-row. By the update of secret index [j] in line 10 of Algorithm 2, whenever [vi]=1, [j] points to the row number of [M′] where the current row of [M] is supposed to go. A matrix [ΔM′] is formed that is equal to [M′] if [vi] is zero, and consists of N copies of the ith row of [M] if [vi] is one (line 8 of Algorithm 2). The [j]th row of [ΔM′] is then copied into matrix [M′] (line 9 of Algorithm 2). Note that if [vi]=0 then [M′] does not change; otherwise its [j]th row is set to the ith row of [M], as intended.


Now, function ƒ is applied to all data elements of the smaller matrix [M′] (line 12 of Algorithm 2) and function g is applied to all elements of [M] (line 13 of Algorithm 2).


Finally, the results of applying ƒ to [M′] are merged with the results of applying g to [M]. The algorithm goes through all rows of [N], where secret index [j] keeps track of which row of [N′] should be written to [N] if [vi]=1 (line 19 of Algorithm 2). The respective row is retrieved from [N′] (line 17 of Algorithm 2); and the ith row of [N] is overwritten with that row if [vi]=1 or kept as-is if [vi]=0 (line 18 of Algorithm 2).
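For illustration, Algorithm 2 can likewise be modelled in the clear by the following Python sketch, reusing the Ix/IxGet/IxSet/IxCondUpdate helpers from the sketch after Algorithm 1; it prototypes the data flow only, not the underlying MPC protocol:

def filtered_map(f, g, phi, N, z, M):
    n, k = len(M), len(M[0])
    v = [1 - phi(row) for row in M]        # ones where phi is NOT satisfied
    Mp = [list(z) for _ in range(N)]       # compressed set, default rows z
    j = Ix(0, N)
    for i in range(n):                     # compression (lines 6-10)
        dMp = [[Mp[u][c] + v[i] * (M[i][c] - Mp[u][c]) for c in range(k)]
               for u in range(N)]
        Mp = IxSet(Mp, dMp, j)
        j = IxCondUpdate(j, v[i])
    Np = [f(row) for row in Mp]            # f on the compressed set only
    Nout = [g(row) for row in M]           # g on the full set
    j = Ix(0, N)
    for i in range(n):                     # decompression (lines 15-19)
        c = IxGet(Np, j)
        Nout[i] = [Nout[i][t] + v[i] * (c[t] - Nout[i][t]) for t in range(k)]
        j = IxCondUpdate(j, v[i])
    return Nout

# hypothetical f, g, phi: f is "expensive", g = 0, and phi holds iff x == 0
f = lambda row: [row[0] ** 3]
g = lambda row: [0]
phi = lambda row: 1 if row[0] == 0 else 0
out = filtered_map(f, g, phi, 2, [1], [[0], [2], [0], [3], [0]])
assert out == [[0], [8], [0], [27], [0]]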


Filtered map-reduce procedure—The following section relates to the filtered map-reduce procedure as shown in Algorithm 3 below, and provides a specific implementation of the second embodiment above. Algorithm 3 takes as arguments the function ƒ of the mapping, predicate/criterion ϕ, operator ⊕, upper bound N, and a default value z such that ƒ(z) is the neutral element of ⊕.












Algorithm 3 Filtered map-reduce with complex function f, predicate ϕ, operator ⊕, upper bound N, default value z

1: function FilteredMapReduce(f, ϕ, ⊕, N, z; [M])
2:   ▷ for each item in the dataset, check whether predicate ϕ is satisfied
3:   for i = 1, . . . , |[M]| do [vi] ← 1 − ϕ([Mi])
4:   ▷ compress dataset [M] to the items [M′] not satisfying ϕ
5:   for i = 1, . . . , N do [M′i] ← z
6:   [j] ← Ix(0)
7:   for i = 1, . . . , |[M]| do
8:     [ΔM′] ← ([M′u,v] + [vi] · ([Mi,v] − [M′u,v]))u=1,...,N; v=1,...,k
9:     [M′] ← IxSet([M′]; [ΔM′]; [j])
10:    [j] ← IxCondUpdate([j], [vi])
11:  ▷ apply f to the compressed dataset
12:  for i = 1, . . . , N do [N′i] ← f([M′i])
13:  ▷ reduce the results using ⊕
14:  return [N′1] ⊕ . . . ⊕ [N′N]









The first steps of Algorithm 3, to check ϕ and obtain a compressed matrix [M′] (lines 2-10 of Algorithm 3), are the same as Algorithm 2 above. In this case, function ƒ is applied to [M′] (line 12 of Algorithm 3) but there is no need to apply g to [M]. Instead, the result is reduced with ⊕ and the result returned (line 14 of Algorithm 3).
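For illustration, the corresponding clear-text Python sketch of Algorithm 3 (again reusing the Ix/IxSet/IxCondUpdate helpers from the sketch after Algorithm 1, and prototyping only the data flow) is:

from functools import reduce

def filtered_map_reduce(f, phi, op, N, z, M):
    n, k = len(M), len(M[0])
    v = [1 - phi(row) for row in M]        # ones where phi is NOT satisfied
    Mp = [list(z) for _ in range(N)]       # compressed set, default rows z
    j = Ix(0, N)
    for i in range(n):                     # same compression as Algorithm 2
        dMp = [[Mp[u][c] + v[i] * (M[i][c] - Mp[u][c]) for c in range(k)]
               for u in range(N)]
        Mp = IxSet(Mp, dMp, j)
        j = IxCondUpdate(j, v[i])
    # f(z) must be the neutral element of op so that the padding rows vanish
    return reduce(op, (f(row) for row in Mp))

f = lambda row: row[0] ** 3                # hypothetical "expensive" f
phi = lambda row: 1 if row[0] == 0 else 0  # f(x) = 0 (neutral for +) when phi holds
assert filtered_map_reduce(f, phi, lambda a, b: a + b, 2, [0], [[0], [2], [0], [3]]) == 8 + 27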


Some extensions to the above embodiments and algorithms are set out below:


Obtaining upper bounds—The algorithms above assume that an upper bound N is available on the number of data elements in the data set not satisfying the predicate ϕ. In some situations, such an upper bound may already be available and predefined. For example, in a case study presented below, the map operation is combined with the disclosure of an aggregated version of the data set, from which an upper bound can be determined. In other situations, an upper bound may not be available but revealing it may not be considered a privacy problem. In this case, after determining the vector [v], its sum Σ[vi] can be opened up by the worker nodes 2 and used as a value for N. As an alternative, the sum can be rounded or perturbed so as not to reveal its exact value. In yet other situations, a likely upper bound may be available but it may be violated. In such a case, Σ[vi] can be computed and compared to the supposed upper bound, only leaking the result of that comparison.


Executing g only on mapped items—In the first embodiment above, g is executed on all data elements 22 in the data set 20, whereas the results of applying g are only used for data elements 22 where ϕ is satisfied. If, apart from an upper bound N on the number of data elements not satisfying ϕ, there is also a lower bound on the number of data elements not satisfying ϕ (i.e., an upper bound on the number of items satisfying ϕ), then it is possible to compute g just on those items, at the expense of making the compression/decompression operations 26, 42 more computationally expensive. In cases where g is relatively complex, this approach can reduce the overall computational burden relative to computing g on every data element 22 in the data set 20.


Block-wise application—For large data sets, instead of applying the above embodiments/algorithms to the whole data set, it may be more efficient to divide the data set into smaller blocks of data elements and apply the map operation to these smaller blocks. This is because the indexing functions used in the compression and decompression operations described above typically scale linearly in both the size of the non-compressed and compressed data sets. However, dividing the original data set into smaller blocks requires upper bounds for each individual block to be known, as opposed to one overall upper bound N. This decreases privacy insofar as these upper bounds are not already known for other reasons. In this sense, providing block-wise processing allows a trade-off between speed and privacy (where a block size of 1 represents the previously-mentioned alternative to reveal predicate ϕ for each item in the data set).


Flexible application—While the techniques according to the first embodiment described above avoid unnecessary executions of ƒ, they do so at the expense of additional computations of g, checking ϕ, and performing the compression and decompression operations. Hence, if the upper bound N is not small enough, then the techniques described above do not save time. For instance, in the case study described below, the algorithm only saves time if at most five out of ten data elements do not satisfy ϕ. If the execution times of the various computations are known, then based on the upper bound N a flexible decision can be made as to whether to perform a traditional mapping operation (i.e. applying ƒ to each data element) or a filtered mapping operation. If these execution times are not known beforehand, they can be measured as the computation progresses. In addition, if the upper bound N is zero, then the compression/decompression procedures can be skipped.
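For illustration, such a decision could be sketched as the rough cost model below; the cost parameters are assumptions that would be measured for a concrete deployment rather than values taken from this disclosure:

def use_filtered_map(n, N, cost_f, cost_g, cost_phi, cost_compress):
    # plain map: n executions of f; filtered map: N executions of f plus
    # the phi-check and g on all n elements plus (de)compression overhead
    plain = n * cost_f
    filtered = N * cost_f + n * (cost_phi + cost_g) + cost_compress
    return filtered < plain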


The flow chart in FIG. 5 shows a method of applying a first function to each data element in a first data set according to the techniques described herein. The method steps in FIG. 5 are described below in terms of the operations performed in a system 1 by a plurality of worker nodes 2 to apply the first function to data elements in the data set, with each step being performed by two or more worker nodes 2 as a multiparty computation. However, it will be appreciated that each step as illustrated and described below can also be understood as referring to the operations of an individual worker node 2 in the multiparty computation.


In addition, it will be appreciated that any particular worker node 2 in the system 1 may participate in or perform any one or more of the steps shown in FIG. 5. Thus, a particular worker node 2 may only participate in or perform one of the steps in FIG. 5, or a particular worker node 2 may participate in or perform any two or more (consecutive or non-consecutive) steps in FIG. 5, or a particular worker node 2 may participate in or perform all of the steps shown in FIG. 5.


At the start of the method, there is a data set, referred to as a first data set, that comprises a plurality of data elements. The data set can be provided to the system 1 by an input node as a private/secret input, or the data set can belong to one of the worker nodes 2 that is to participate in the method and the worker node 2 can provide the data set as an input to the method and the other worker nodes 2 as a private/secret input. In the method, a function ƒ, referred to as a first function, is to be applied to each of the data elements in the data set. For the method to be effective in improving the performance of the mapping of the first function on to the first data set, the first function should be relatively computationally expensive to compute as part of a multiparty computation, there should be a criterion that is easy to check for any particular data element such that, if true, the result of applying the first function to the data element is equal to the result of applying a second function to the data element (where the second function is relatively computationally easy to compute as part of a MPC), and the criterion should hold for a large part of the data set.


In a first step, step 101, it is determined whether each data element in the first data set satisfies the criterion. This check is performed as a MPC by a plurality of worker nodes 2. As noted above, the criterion is satisfied for a particular data element only if (or if and only if) the result of applying the first function to the data element is equal to the result of applying the second function to the data element.


In some embodiments, it can be determined whether the number of data elements in the first data set that do not satisfy the criterion exceeds a first threshold value (also referred to herein as an upper bound). If the number of data elements in the first data set that do not satisfy the criterion does not exceed the first threshold value, then the method can proceed to the next steps in the method and the mapping operation can continue. However, if the number of data elements in the first data set that do not satisfy the criterion does exceed the first threshold value, then the mapping operation can proceed in a conventional way (e.g. by applying the first function to each data element in the data set as part of a MPC), or the method can be stopped. The first threshold value can be set to a value that enables the method of FIG. 5 to provide useful performance gains over the conventional approach of applying the first function to all of the data elements in the data set.


Next, in step 103, a compressed data set is formed that comprises the data elements in the first data set that do not satisfy the criterion. This compression is performed as a MPC by a plurality of worker nodes 2. Thus, the data elements for which the result of applying the first function to the data element is different to the result of applying the second function to the data element are compressed into the compressed data set.


In some embodiments, in addition to the data elements in the first data set that do not satisfy the criterion, one or more data elements corresponding to a default value are included in the compressed data set. In particular, if the number of data elements that do not satisfy the criterion is less than the upper bound (first threshold value), one or more data elements corresponding to the default value can be included in the compressed data set to bring the total number of data elements in the compressed data set up to the upper bound.


In some embodiments, the first threshold value may be determined as described above, and can be determined prior to step 101 being performed, but in other embodiments the first threshold value can be determined based on the total number of data elements in the first data set that do not satisfy the criterion. In this case, to avoid revealing the exact number of data elements in the first data set that do not satisfy the criterion to the worker nodes 2, the total number can be rounded or perturbed in order to generate the first threshold value.


Next, after the compressed data set has been formed, in step 105 the first function is applied to each data element in the compressed data set. This mapping step is performed as a MPC by a plurality of worker nodes 2. In embodiments where the compressed data set includes one or more default values, step 105 comprises applying the first function to each of the data elements in the first data set that do not satisfy the criterion and to each of the one or more data elements corresponding to the default value. It will be appreciated that the worker nodes 2 performing the computation in this step are not aware of which data elements are data elements from the first data set and which data elements are default values.


Finally, in step 107, an output of the mapping is formed based on the results of applying the first function to the data elements in the compressed data set. Again, forming the output is performed as a MPC by a plurality of worker nodes 2.


In some embodiments (corresponding to the filtered map embodiments described above), the output of the method is to be a second data set where each data element of the second data set corresponds to the result of applying the first function to the respective data element in the first data set. Therefore, in some embodiments, the method can further comprise the step of applying the second function to each data element in the first data set using MPC techniques, and the output can be formed in step 107 from the results of step 105 and the results of applying the second function to each data element in the first data set.


Alternatively, in some embodiments the method can further comprise the step of applying the second function to each data element in the first data set that does satisfy the criterion using MPC techniques, and the output can be formed in step 107 from the results of step 105 and the results of applying the second function to the data elements in the first data set that do satisfy the criterion. To implement this step, a second compression step can be performed which compresses the data elements that do satisfy the criterion into a second compressed data set, and the second function can be applied to the second compressed data set. The second compressed data set can include one or more data elements corresponding to one or more default values as described above for the compressed data set formed in step 103. In these embodiments, there can be a second threshold value, and the second compressed data set may only be formed if it is determined that the number of data elements in the first data set that do satisfy the criterion does not exceed the second threshold value.


In either embodiment above, the second data set can be formed so that it comprises data elements corresponding to the results of applying the first function to the data elements in the compressed data set that were in the first data set and that did not satisfy the criterion, and data elements corresponding to the result of applying the second function to the data elements in the first data set for which the criterion was satisfied. Thus, the second data set can have the same number of data elements as the first data set.


In some embodiments, corresponding to the filtered map-reduce embodiments above, the output of the method in step 107 is a combination of the results of applying the first function to the data elements in the compressed data set that were in the first data set and that did not satisfy the criterion. In particular, the combination of the results can be formed using an associative operator (e.g. addition), where the criterion being satisfied by a data element in the first data set means that the result of applying the first function or the second function to the data element is a neutral element for the associative operator (e.g. zero).


As noted above, any worker node 2 in the system 1 may perform any one or more of the steps shown in FIG. 5 or as described above as part of a MPC with one or more other worker nodes 2. As such, a particular worker node 2 may perform, or be configured or adapted to perform, any one or more of steps 101, 103, 105, 107 and the steps described above.


Exemplary implementation and evaluation of the filtered map-reduce technique—This section presents a case study that shows how the above techniques improve the performance of a map operation (specifically, a map-reduce operation). The case study relates to a Kaplan-Meier survival analysis.


The Kaplan-Meier estimator is an estimation of the survival function (i.e., the probability that a patient survives beyond a specified time) based on lifetime data. The estimated probability pi at a given time i is given as pi = Πj≤i (nj − dj)/nj, where nj is the number of patients still in the study just before time j and dj is the number of deaths at time j; the product is over all time points where a death occurred (although it should be noted that nj decreases not just by deaths but also by people dropping out of the study for other reasons).
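For illustration, the estimator can be computed in the clear with a few lines of Python (the input data below is hypothetical):

def kaplan_meier(times, n_at_risk, deaths):
    # p_i = product over all j <= i with d_j > 0 of (n_j - d_j) / n_j
    p, curve = 1.0, []
    for t, n, d in zip(times, n_at_risk, deaths):
        if d > 0:
            p *= (n - d) / n
        curve.append((t, p))
    return curve

# e.g. 10 patients, one death at t=2 and two deaths at t=4
print(kaplan_meier([1, 2, 3, 4], [10, 10, 9, 8], [0, 1, 0, 2]))
# [(1, 1.0), (2, 0.9), (3, 0.9), (4, 0.675)]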


A simple statistical test to decide if two Kaplan-Meier estimates are statistically different is the so-called Mantel-Haenszel logrank test. For instance, this is the test performed by R's survdiff call (the “survdiff” command of the R software environment for statistical computing and graphics, www.r-project.org). Given values nj,1, nj,2, dj,1, dj,2 at each time point j, define:








E_{j,1} = \frac{(d_{j,1} + d_{j,2}) \cdot n_{j,1}}{n_{j,1} + n_{j,2}};

V_j = \frac{n_{j,1} \, n_{j,2} \, (d_{j,1} + d_{j,2}) \, (n_{j,1} + n_{j,2} - d_{j,1} - d_{j,2})}{(n_{j,1} + n_{j,2})^2 \cdot (n_{j,1} + n_{j,2} - 1)};

X = \frac{\left( \sum_j E_{j,1} - \sum_j d_{j,1} \right)^2}{\sum_j V_j}.





The null hypothesis, i.e., the hypothesis that the two curves represent the same underlying survival function, corresponds to X following a χ₁² distribution (chi-squared with one degree of freedom). This null hypothesis is rejected (i.e., the curves are different) if 1−cdf(X) < α, where cdf is the cumulative distribution function of the χ₁² distribution and, e.g., α=0.05.


It should be noted that the computation of this statistical test can be performed using a map-reduce operation. Namely, each tuple (nj,1, nj,2, dj,1, dj,2) can be mapped to (Ej,1, Vj, dj,1), and these values are reduced using point-wise summation to obtain (ΣEj,1, ΣVj, Σdj,1); these values are then used to compute X. Moreover, it should be noted that, under the easy-to-establish criterion ϕ := ((dj,1, dj,2) = (0, 0)), it is given that (Ej,1, Vj, dj,1) = (0, 0, 0) (the neutral element under point-wise summation), so the conditions under which the filtered map-reduce can be applied are satisfied. As default value, z = (nj,1, nj,2, dj,1, dj,2) = (1, 1, 0, 0) can be used.
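For illustration, the map and reduce steps of this test can be prototyped in the clear as follows; this Python sketch computes the plain (non-secret-shared) statistic and is not the MPC protocol itself:

def logrank_contribution(n1, n2, d1, d2):
    # map step: (n_{j,1}, n_{j,2}, d_{j,1}, d_{j,2}) -> (E_{j,1}, V_j, d_{j,1});
    # time points without deaths yield the neutral element (0, 0, 0)
    if d1 + d2 == 0:
        return (0.0, 0.0, 0)
    n, d = n1 + n2, d1 + d2
    E = d * n1 / n
    V = n1 * n2 * d * (n - d) / (n * n * (n - 1))
    return (E, V, d1)

def logrank_statistic(rows):
    # reduce step: point-wise summation, then X = (sum d - sum E)^2 / sum V
    contribs = [logrank_contribution(*row) for row in rows]
    sE, sV, sd = (sum(c[i] for c in contribs) for i in range(3))
    return (sd - sE) ** 2 / sV

# hypothetical data: one (n_{j,1}, n_{j,2}, d_{j,1}, d_{j,2}) tuple per time point
print(logrank_statistic([(10, 10, 1, 0), (9, 10, 0, 0), (9, 10, 0, 2)]))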


Anonymized Survival Graph and Upper Bounds

In the case of Kaplan-Meier, the values nj and dj at each time point are non-anonymised data. This data can be anonymised by merging different time points. In particular, a block of N consecutive time points (ni, di)i=1,...,N is anonymised to one time point (n, d) with n = n1 and d = Σi di.


This anonymised survival data enables an upper bound N to be established on the number of time points for which the above ϕ does not hold. Namely, given consecutive anonymised time points (n, d) and (n′, d′), the number of points in the block corresponding to (n, d) for which ϕ does not hold is at most n − n′: the number of people that dropped out during that time interval. Hence, each block has an upper bound, enabling block-wise application of the map-reduce algorithm as discussed above.


The details of performing the statistical test on Kaplan-Meier survival data are now presented. Apart from the basic MPC framework discussed above, it is also assumed that the following are available:

    • a function [c]←Div([a], [b], L) that, given secret-shared [a] and [b], returns secret-shared [c] such that a/b ≈ c·2^−L, i.e., c is a fixed-point representation of a/b with L bits of precision. Such an algorithm can be obtained by adaptation of division algorithms from “Design of large scale applications of secure multiparty computation: secure linear programming” referenced above, or “High-performance secure multi-party computation for data mining applications” by D. Bogdanov, M. Niitsoo, T. Toft and J. Willemson, Int. J. Inf. Secur., 11(6):403-418, November 2012. By convention, [a]1, [a]2 are secret shares representing a fixed-point value with precision BITS_1 and BITS_2 respectively, defined by the application.
    • An operator [b]←[a]>>L that shifts secret-shared value a to the right by L bits, as also found in the above two references.
    • A function [ƒl]←eqz([x]) that sets ƒl=1 if x=0, and ƒl=0 otherwise. A protocol to implement this is described in the first reference above.
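For concreteness, the following is a minimal plaintext (non-secret-shared) Python model of these three primitives, intended only to pin down their fixed-point semantics; the actual MPC protocols are those of the cited references.

def Div(a, b, L):
    # returns c such that a/b ~= c * 2**-L (fixed point, L bits precision)
    return (a << L) // b

def shift_right(a, L):
    # models the [b] <- [a] >> L operator
    return a >> L

def eqz(x):
    # models [fl] <- eqz([x]): fl = 1 if x == 0, else fl = 0
    return 1 if x == 0 else 0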


Given these primitives, the row-wise operation for the Kaplan-Meier test, i.e. the function ƒ for the map-reduce operation, can be implemented as shown in Algorithm 4 below. The algorithm to evaluate ϕ, i.e. the function that determines which rows do not contribute to the test, is shown in Algorithm 5 below.












Algorithm 4 Log test inner loop















Require: [di,1], [di,2], [ni,1], [ni,2] survival data at time point i


Ensure: ([ei]1, [vi]1, [di,1]) contributions to Σj Ej,1, Σj Vj, Σj dj,1 for test statistic X








1:
function f([di,1], [di,2], [ni,1], [ni,2])


2:
 [ac] ← [di,1] + [di,2]


3:
 [bd] ← [ni,1] + [ni,2]


4:
 [frc]1 ← Div([ac]; [bd]; BITS_1)


5:
 [ei]1 ← [frc]1 · [ni,1]


6:
 [vn] ← [ni,1] · [ni,2] · [ac] · ([bd] − [ac])


7:
 [vd] ← [bd] · [bd] · ([bd] − 1)


8:
 [vi]1 ← Div([vn]; [vd]; BITS_1)


9:
 return ([ei]1, [vi]1, [di,1])
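For reference, the following is a plaintext Python rendering of Algorithm 4, using the Div model sketched earlier (repeated here so the snippet is self-contained) and BITS_1 = 23 as in the prototype; secret sharing is omitted and all values are plain integers.

BITS_1 = 23

def Div(a, b, L):
    return (a << L) // b  # plaintext stand-in for the MPC division protocol

def f_row(d1, d2, n1, n2):
    ac = d1 + d2
    bd = n1 + n2
    frc = Div(ac, bd, BITS_1)      # (d1 + d2) / (n1 + n2), fixed point
    ei = frc * n1                  # E_{i,1} with BITS_1 bits precision
    vn = n1 * n2 * ac * (bd - ac)  # numerator of V_i
    vd = bd * bd * (bd - 1)        # denominator of V_i
    vi = Div(vn, vd, BITS_1)       # V_i with BITS_1 bits precision
    return ei, vi, d1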



















Algorithm 5 Log test criterion















Require: [di,1], [di,2], [ni,1], [ni,2] survival data at time point i


Ensure: [fl] = 1 if time point does not provide contribution to test statistic








1:
function ϕ([di,1], [di,2], [ni,1], [ni,2])


2:
 [fl] ← eqz([di,1] + [di,2])


3:
 return [fl]









The overall algorithm for performing the logrank test is shown in Algorithm 6 below.












Algorithm 6 Logrank test on survival curves, using filtered map-reduce















Require: [n], [d] t-by-2 matrices of lifetime data for two populations; S step size


Ensure: N, D anonymized data, p p-value for hypothesis of same survival function









1:
for j ← 1, . . . , ┌t/S┐ + 1 do

▷ generate anonymized data



2:
 b ← (j − 1)S + 1; e ← jS









3:
▷ by convention, [n] is extended with copies of its last row and [d] by zeros









4:
 Nj,1 = Open([nb,1]); Nj,2 = Open([nb,2])









5:
 Dj,1 = Open(Σi=be[di,1]); Dj,2 = Open(Σi=be[di,2])









6:
for j ← 1, . . . , ┌t/S┐ do

▷ compute contributions for each block









7:
 b ← (j − 1)S + 1; e ← jS; N ← Nj,1 + Nj,2 − Nj+1,1 − Nj+1,2


8:
 [M] ← {[di,1], [di,2], [ni,1], [ni,2]}i=b,...,e


9:
 ([ej]1, [vj]1, [dj']) ← FilteredMapReduce(f, ϕ, +, N, (0, 0, 1, 1); [M])


10:
[dtot] ← Σj=1┌t/S┐ [dj']; [dtot]1 ← [dtot] << BITS_1


11:
[etot]1 ← Σj=1┌t/S┐ [ej]1; [vtot]1 ← Σj=1┌t/S┐ [vj]1









12:
[dmi]1 ← [dtot]1 − [etot]1



13:
[chi0]2 ← Div([dmi]1; [vtot]1; BITS_2)



14:
[chi]12 ← [chi0]2 · [dmi]1



15:
[chi]1 ← [chi]12 >> BITS_2



16:
chi1 ← Open([chi]1)



17:
p ← 1 − cdfχ12(chi1 · 2^−BITS_1)



18:
return N, D, p









First, as discussed above, the anonymised survival data is computed (lines 1-5 of Algorithm 6). For each S-sized block, the number of participants at its first time point is taken (line 4 of Algorithm 6) and the deaths from all of its time points are summed (line 5 of Algorithm 6). Then, for each block, the upper bound on the number of events is computed (line 7 of Algorithm 6) and the FilteredMapReduce function is applied to obtain the contributions of those time points to the overall test statistic (line 9 of Algorithm 6). These contributions are summed together, and from that the test statistic is computed (lines 10-17 of Algorithm 6). A plaintext sketch of these final steps is given below.
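The following plaintext Python sketch mirrors lines 10-17 of Algorithm 6, assuming the per-block results (e_j, v_j, d_j) from the filtered map-reduce are available as plain fixed-point integers; BITS_1 = 23 and BITS_2 = 30 as in the prototype, and scipy supplies the chi-squared cdf.

from scipy.stats import chi2

BITS_1, BITS_2 = 23, 30

def test_statistic(blocks):
    # blocks: list of (e_j, v_j, d_j); e_j, v_j fixed point with BITS_1 bits
    etot = sum(e for e, _, _ in blocks)
    vtot = sum(v for _, v, _ in blocks)
    dtot = sum(d for _, _, d in blocks) << BITS_1  # lift d to BITS_1 fixed point
    dmi = dtot - etot                              # observed minus expected
    chi0 = (dmi << BITS_2) // vtot                 # Div(dmi, vtot, BITS_2)
    chi = (chi0 * dmi) >> BITS_2                   # chi ~= dmi^2 / vtot, BITS_1 bits
    return 1 - chi2.cdf(chi * 2.0 ** -BITS_1, df=1)  # p-value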


A prototype implementation of the above system has been constructed. The multiparty computation framework has been instantiated using FRESCO (the Framework for Efficient Secure Computation, found at https://github.com/aicis/fresco) with the FRESCO SPDZ back-end for two parties. This framework provides the MPC functionality required for the techniques described herein, as discussed above. Concerning the additional MPC functionality required for Kaplan-Meier as discussed above, the division protocol from "High-performance secure multi-party computation for data mining applications" is adapted to perform right-shifts after every iteration so that it works for smaller moduli; for right-shifting and zero testing, the protocols provided by FRESCO are used. Constants BITS_1=23, BITS_2=30 were used.


As a performance metric, an estimate of the pre-processing time required for the computation is used. The SPDZ protocol, while performing a computation, consumes certain pre-processed data (in particular, so-called multiplication triples and pre-shared random bits) that needs to be generated prior to performing the computation. With state-of-the-art tools, the effort for pre-processing is one or more orders of magnitude greater than the effort for the computation itself; therefore, pre-processing effort is a realistic measure of overall effort. To estimate pre-processing time, the amount of pre-processed data needed during the computation is tracked and multiplied by the cost per pre-processed item, which is obtained by simulating both pre-processing parties in one virtual machine on a conventional laptop. A minimal sketch of this cost model is given below.
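The following is a minimal sketch of the cost model just described; the parameter names are illustrative, and the per-item costs are values to be measured, not figures from the text.

def estimate_preprocessing_time(num_triples, num_bits, cost_per_triple, cost_per_bit):
    # multiply the tracked amounts of pre-processed data by the measured
    # per-item generation costs
    return num_triples * cost_per_triple + num_bits * cost_per_bit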


The graph in FIG. 6 illustrates the performance of a normal map procedure versus the filtered map procedure for the Kaplan-Meier case study for different values of the upper bound N according to the techniques described herein. Assuming there is a list of ten time points for which an upper bound is available (for instance, because anonymised data is released in groups of ten time points), the graph compares the normal map operation (shown in the top row) with the filtered map according to the techniques described herein for various values of the upper bound N. As can be seen, applying the filter (i.e. determining which rows do not contribute to the final result using ϕ) takes virtually no time (shown by the first section of each row). Each row then shows, from left to right, the time needed for the compression operation, the time needed for mapping ƒ onto the compressed list, and the time needed for the decompression operation; it can be seen that the time taken for each operation increases linearly with the upper bound.


When the upper bound N is 6 (the bottom row in FIG. 6), the overhead of compressing and decompressing becomes so large that it is faster to perform a direct mapping of ƒ onto all of the data elements. If decompression is not needed, compressing and mapping ƒ onto the compressed data set is still cheaper (quicker) for an upper bound N=6, but not for an upper bound N=7 (not shown in FIG. 6). On a representative data set, the overall result for the Kaplan-Meier computation is a time decrease of 51% on the map-reduce operation, which constitutes the bulk of the overall computation.


There are therefore provided improved techniques for applying a first function to each data element in a data set that address one or more of the issues with conventional techniques. Generally, the need for multiparty computation arises in many circumstances, for example where multiple mutually distrusting parties want to enable joint analysis on their data sets. Applying a map operation to a list of data elements is a general concept that occurs in many analytics algorithms. The techniques described herein are intended for data sets containing a large number of "trivial" data elements for which the map operation is easy (i.e. for which ϕ is satisfied). The Kaplan-Meier statistical test is one such example, but those skilled in the art will be aware of other data sets and tests to which the techniques described herein can be applied.


Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the principles and techniques described herein, from a study of the drawings, the disclosure and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. A computer program may be stored or distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems. Any reference signs in the claims should not be construed as limiting the scope.

Claims
  • 1. A computer-implemented method of applying a first function to each data element in a first data set, the method comprising: (i) determining whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; (ii) forming a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; (iii) applying the first function to each data element in the compressed data set; and (iv) forming an output based on the results of step (iii); wherein steps (i)-(iv) are performed using multiparty computation, MPC, techniques.
  • 2. A computer-implemented method as claimed in claim 1, wherein the criterion is satisfied if and only if the result of applying the first function to the data element is equal to the result of applying the second function to the data element.
  • 3. A computer-implemented method as claimed in claim 1, wherein the step of forming the compressed data set comprises forming the compressed data set from the data elements in the first data set that do not satisfy the criterion and one or more data elements corresponding to a default value.
  • 4. A computer-implemented method as claimed in claim 3, wherein the step of applying the first function to each data element in the compressed data set comprises applying the first function to each of the data elements in the first data set that do not satisfy the criterion and applying the first function to each of the one or more data elements corresponding to the default value.
  • 5. A computer-implemented method as claimed in claim 1, wherein the method further comprises the steps of: determining whether the number of data elements in the first data set that do not satisfy the criterion exceeds a first threshold value; and performing steps (ii), (iii) and (iv) if the number of data elements in the first data set that do not satisfy the criterion does not exceed the first threshold value.
  • 6. A computer-implemented method as claimed in claim 1, wherein the method further comprises the step of: determining a first threshold value based on the number of data elements in the first data set that do not satisfy the criterion.
  • 7. A computer-implemented method as claimed in claim 5, wherein the compressed data set comprises a total number of data elements equal to the first threshold value.
  • 8. (canceled)
  • 9. A computer-implemented method as claimed in claim 1, wherein the method further comprises the step of: (v) applying the second function to each data element in the first data set using MPC techniques; and wherein the step of forming an output comprises forming an output based on the results of steps (iii) and (v), wherein the step of forming an output based on the results of steps (iii) and (v) comprises: forming a second data set from the results of step (iii) and step (v), wherein the second data set comprises data elements corresponding to the results of applying the first function to the data elements in the compressed data set that were in the first data set and that did not satisfy the criterion, and data elements corresponding to the result of applying the second function to the data elements in the first data set for which the criterion was satisfied.
  • 10. A computer-implemented method as claimed in claim 9, wherein the second data set has the same number of data elements as the first data set.
  • 11. A computer-implemented method as claimed in claim 9, wherein the method further comprises: applying the second function to each data element in the first data set that does satisfy the criterion using MPC techniques if the number of data elements in the first data set that do satisfy the criterion does not exceed a second threshold value.
  • 12. A computer-implemented method as claimed in claim 1, wherein the step of forming an output based on the results of step (iii) comprises: forming the output by using an associative operator to combine the results of applying the first function to the data elements in the compressed data set that were in the first data set and that did not satisfy the criterion.
  • 13. A computer-implemented method as claimed in claim 12, wherein the criterion is such that the criterion is satisfied only if the result of applying the first function to the data element and the result of applying the second function to the data element is a neutral element for the associative operator.
  • 14-15. (canceled)
  • 16. A system for applying a first function to each data element in a first data set, the system comprising: a plurality of worker nodes, wherein the plurality of worker nodes are configured to use multiparty computation, MPC, techniques to: determine whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; form a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; apply the first function to each data element in the compressed data set; and form an output based on the results of applying the first function to each data element in the compressed data set.
  • 17. A system as claimed in claim 16, wherein the criterion is satisfied if and only if the result of applying the first function to the data element is equal to the result of applying the second function to the data element.
  • 18. A system as claimed in claim 16, wherein the plurality of worker nodes are configured to form the compressed data set from the data elements in the first data set that do not satisfy the criterion and one or more data elements corresponding to a default value.
  • 19. A system as claimed in claim 18, wherein the plurality of worker nodes are configured to apply the first function to each of the data elements in the first data set that do not satisfy the criterion and apply the first function to each of the one or more data elements corresponding to the default value.
  • 20. A system as claimed in claim 16, wherein the plurality of worker nodes are further configured to: determine whether the number of data elements in the first data set that do not satisfy the criterion exceeds a first threshold value; and perform the forming a compressed data set, applying the first function and forming an output if the number of data elements in the first data set that do not satisfy the criterion does not exceed the first threshold value.
  • 21. A system as claimed in claim 16, wherein the plurality of worker nodes are configured to: determine a first threshold value based on the number of data elements in the first data set that do not satisfy the criterion.
  • 22. A system as claimed in claim 20, wherein the compressed data set comprises a total number of data elements equal to the first threshold value.
  • 23. A system as claimed in claim 16, wherein the plurality of worker nodes are further configured to: apply the second function to each data element in the first data set using MPC techniques; and wherein the plurality of worker nodes are configured to form an output based on the results of applying the first function and applying the second function.
  • 24. A system as claimed in claim 23, wherein the plurality of worker nodes are configured to form an output based on the results of applying the first function and applying the second function by: forming a second data set from the results of applying the first function and applying the second function, wherein the second data set comprises data elements corresponding to the results of applying the first function to the data elements in the compressed data set that were in the first data set and that did not satisfy the criterion, and data elements corresponding to the result of applying the second function to the data elements in the first data set for which the criterion was satisfied.
  • 25. A system as claimed in claim 24, wherein the second data set has the same number of data elements as the first data set.
  • 26. A system as claimed in claim 23, wherein the plurality of worker nodes are configured to: apply the second function to each data element in the first data set that does satisfy the criterion using MPC techniques if the number of data elements in the first data set that do satisfy the criterion does not exceed a second threshold value.
  • 27. A system as claimed in claim 16, wherein the plurality of worker nodes are configured to form an output based on the results of applying the first function by: forming the output by using an associative operator to combine the results of applying the first function to the data elements in the compressed data set that were in the first data set and that did not satisfy the criterion.
  • 28. A system as claimed in claim 27, wherein the criterion is such that the criterion is satisfied only if the result of applying the first function to the data element and the result of applying the second function to the data element is a neutral element for the associative operator.
  • 29. (canceled)
  • 30. A worker node for use in applying a first function to each data element in a first data set, wherein the worker node is configured to use one or more multiparty computation, MPC, techniques with at least one other worker node to: determine whether each data element in the first data set satisfies a criterion, wherein the criterion is satisfied only if the result of applying the first function to the data element is equal to the result of applying a second function to the data element; form a compressed data set comprising the data elements in the first data set that do not satisfy the criterion; apply the first function to each data element in the compressed data set; and form an output based on the result of applying the first function to each data element in the compressed data set.
  • 31. A worker node as claimed in claim 30, wherein the criterion is satisfied if and only if the result of applying the first function to the data element is equal to the result of applying the second function to the data element.
  • 32. A worker node as claimed in claim 30, wherein the worker node is configured to form the compressed data set from the data elements in the first data set that do not satisfy the criterion and one or more data elements corresponding to a default value.
  • 33. A worker node as claimed in claim 32, wherein the worker node is configured to apply the first function to each of the data elements in the first data set that do not satisfy the criterion and apply the first function to each of the one or more data elements corresponding to the default value.
  • 34-57. (canceled)
PCT Information
Filing Document Filing Date Country Kind
PCT/EP2018/084654 12/13/2018 WO 00
Provisional Applications (1)
Number Date Country
62609450 Dec 2017 US