Systems and methods for efficient fixed-base multi-precision exponentiation

Information

  • Patent Grant
  • Patent Number
    10,693,627
  • Date Filed
    Friday, January 19, 2018
  • Date Issued
    Tuesday, June 23, 2020
  • Inventors
  • Original Assignees
  • Examiners
    • Sandifer; Matthew D
  • Agents
    • Carr & Ferrell LLP
Abstract
Systems and methods for efficient fixed-base multi-precision exponentiation are disclosed herein. An example method includes applying a multi-precision exponentiation algorithm to a base number, the multi-precision exponentiation algorithm comprising a pre-generated lookup table used to perform calculations on the base number, the pre-generated lookup table comprising pre-calculated exponentiated values of the base number.
Description
FIELD OF INVENTION

The present disclosure is directed to the technical field of systems that utilize computational algorithms as applied to encryption methods (and specifically homomorphic encryption, as disclosed in the related applications referenced above) and to computing systems. More particularly, the present disclosure relates to the technical field of multi-precision arithmetic algorithms.


SUMMARY

According to some embodiments, the present disclosure is directed to a method comprising: applying a multi-precision exponentiation algorithm to a base number, the multi-precision exponentiation algorithm comprising a pre-generated lookup table used to perform calculations on the base number, the pre-generated lookup table comprising pre-calculated exponentiated values of the base number. The method also includes returning one or more calculated values for the base number.


According to some embodiments, the present disclosure is directed to a system comprising: a processor; and a memory for storing executable instructions, the processor executing the instructions to: apply a multi-precision exponentiation algorithm to a large base number, the multi-precision exponentiation algorithm comprising a pre-generated lookup table used to perform calculations on the base number, the pre-generated lookup table comprising pre-calculated exponentiated values of the base number; and return a calculated value for the base number.





BRIEF DESCRIPTION OF THE DRAWINGS

Certain embodiments of the present technology are illustrated by the accompanying figures. It will be understood that the figures are not necessarily to scale and that details not necessary for an understanding of the technology or that render other details difficult to perceive may be omitted. It will be understood that the technology is not necessarily limited to the particular embodiments illustrated herein.



FIG. 1 is a flowchart of an example method for utilizing a multi-precision exponentiation algorithm to perform exponentiation of a large base number (e.g., above 64 bits).



FIG. 2 is a flowchart of an example method for determining if a time required for using the multi-precision exponentiation algorithm is less than a time required to perform the calculations of the base number directly using exponents.



FIG. 3 illustrates an exemplary computing system that may be used to implement embodiments according to the present technology.





DETAILED DESCRIPTION

Generally speaking, the present disclosure includes systems and methods that provide efficient fixed-base multi-precision exponentiation calculations in order to perform compute operations on numbers that are larger than 64 bits. These processes are generally referred to as multi-precision processing.


For context, many computational algorithms, especially encryption algorithms such as Paillier, RSA, or ElGamal, involve performing arithmetic operations on very large numbers. These numbers commonly require several thousand bits to represent on a computer. Modern general-purpose computer hardware can only perform arithmetic operations directly on numbers that can be represented in 32 bits (called single precision numbers) or 64 bits (called double precision). To perform operations on larger numbers, special-purpose algorithms known as multi-precision arithmetic (also known as arbitrary-precision arithmetic) algorithms must be used.
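

For illustration only, the short code sketches that appear throughout this description use Python, whose built-in integers are arbitrary precision, as a stand-in for a multi-precision arithmetic library; the variable and function names in these sketches are hypothetical and are not part of the disclosed embodiments. For example:

    # Python's int type is arbitrary precision, so it can model values far larger
    # than the 32-bit (single precision) or 64-bit (double precision) numbers that
    # general-purpose hardware operates on directly.
    B = 2 ** 2048 + 1
    print(B.bit_length())   # 2049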


The systems and methods disclosed herein implement algorithms for efficient multi-precision exponentiation in cases where the same base value is raised to many different exponents. The systems and methods perform a pre-computation step that generates a lookup table containing at least one base value raised to many different exponents. In some embodiments, many lookup tables are pre-generated and stored for later use.


Then, each of the desired exponentiations is performed by multiplying together different elements of the lookup table, which reduces the amount of computation required for each exponent. If there are enough exponents to compute, the time saved in computing each exponent outweighs the lookup table construction time. Thus, these systems and methods improve the performance of the underlying computing system by allowing for faster and more efficient multi-precision computation. Moreover, the methods disclosed herein enable a computing system to perform compute operations that are otherwise impossible for computing devices limited to native operations on 32-bit or 64-bit numbers. Computing systems that are only configured to perform single and double precision computations can benefit by being able to perform arbitrary-precision/multi-precision computations using the methods disclosed herein. In sum, the problem being solved by these systems and methods is a computer-centric or technological problem, and the solutions disclosed herein improve the performance of the computer.


The following method is performed using a specifically configured computing system. For example, the computer system of FIG. 3 is specifically configured to perform the methods (e.g., multi-precision exponentiation algorithms) described herein.


A specifically configured computer system of the present disclosure is configured to exponentiate a large number. For example, the system exponentiates a base number B, which is a large number requiring at least 128 bits to be represented on a system. The base number B is exponentiated with a set of exponent values E = {E_1, E_2, . . . , E_n}. These exponent values are of any size. The following algorithm efficiently computes the exponentiated values of B across E:

V = {V_1, V_2, . . . , V_n} where V_i = B^{E_i} for all i.


In other words, the system utilizes the equation above, taking a desired base number B and a set of n desired exponents E and efficiently computing B raised to each of the exponents.


A lookup table for the base number B is created by the system using the following process. In some embodiments, the system is configured to perform a computation where E_max is the largest exponent in the desired set of exponents E. Also, Y is the number of bits required to represent the largest exponent E_max. The system applies an exponent size parameter d, where 0 < d ≤ Y (larger values of d require more memory but lead to larger computational speedups, as described below).


Also, R = ⌈Y/d⌉ is the number of d-bit “windows” required to represent any value in the desired set of exponents E. The system also applies the number of possible values in a d-bit window, represented as C = 2^d.
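

As a concrete sketch of these parameters (the exponent set E, the choice d = 4, and the variable names are illustrative assumptions only):

    from math import ceil

    E = [170, 255, 19]          # example set of desired exponents
    E_max = max(E)              # largest exponent in E
    Y = E_max.bit_length()      # bits required to represent E_max (here 8)
    d = 4                       # exponent size parameter, with 0 < d <= Y
    R = ceil(Y / d)             # number of d-bit windows (here 2)
    C = 2 ** d                  # possible values in a d-bit window (here 16)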


Using the constructs above, the system will generate a lookup table T with R rows and C columns. The following descriptions will refer to the element in row i and column j of T as T_{i,j}, where 0 ≤ i < R and 0 ≤ j < C.


The system begins by setting T_{i,0} = 1 for all i. Then, the system sets T_{0,1} = B. The system will then compute the remainder of row 0 as follows: for each i ∈ {2k+1 | k ≥ 0 and 2k+1 < C}, the system will first determine if i > 1. If this is true, the system will calculate T_{0,i} = T_{0,1} × T_{0,i−1}. Next, for each j ∈ {2i, 4i, 8i, . . .} with j < C, the system will calculate T_{0,j} = (T_{0,j/2})². Stated otherwise, the system takes each odd index less than C, computes a value at that index by multiplying T_{0,1} with the value at the previous index (which is equivalent to adding 1 to the exponent at the previous index), and then continually doubles the index and squares the value at that index (which is equivalent to doubling the exponent) until the system determines that the end of row 0 has been reached. In this way, the system will compute an appropriate exponentiated or raised value for each index in row 0.
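

A minimal sketch of this row-filling procedure, assuming the table T is a list of lists of Python integers whose column 0 entries are 1 and whose column 1 entry for the row has already been set as described above (the helper name fill_row is hypothetical):

    def fill_row(T, r, C):
        """Complete row r of T, given T[r][0] == 1 and T[r][1] already set.

        For each odd index i < C: if i > 1, multiply T[r][1] by the previous
        entry (equivalent to adding 1 to the exponent), then repeatedly double
        the index and square the entry (equivalent to doubling the exponent)
        until the end of the row is reached.
        """
        for i in range(1, C, 2):                  # odd indices 1, 3, 5, ...
            if i > 1:
                T[r][i] = T[r][1] * T[r][i - 1]
            j = 2 * i
            while j < C:
                T[r][j] = T[r][j // 2] ** 2
                j *= 2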


Next, the system will compute subsequent rows one at a time as follows: For each r ∈ {1, 2, . . . , R−1}, the system sets T_{r,1} = (T_{r−1,C/2})². The system then repeats the process above using T_{r,1} in place of T_{0,1} to compute remaining values for the row.


The result of this process is that row r of the lookup table will contain {B^{2^{rd}·1}, B^{2^{rd}·2}, . . . , B^{2^{rd}·(2^d−1)}}. In other words, the r-th row will contain all possible values for the r-th d-bit “window” of an exponent.
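

Putting the row-filling step together with the row-to-row squaring step, a sketch of the full table construction might look as follows (it reuses the hypothetical fill_row helper above, and T[r][j] ends up holding B raised to 2^{r·d}·j):

    def build_table(B, Y, d):
        """Build the R-by-C lookup table for base B, Y-bit exponents, window size d."""
        C = 1 << d
        R = -(-Y // d)                            # ceil(Y / d)
        T = [[1] * C for _ in range(R)]           # column 0 of every row is 1
        T[0][1] = B
        fill_row(T, 0, C)
        for r in range(1, R):
            T[r][1] = T[r - 1][C // 2] ** 2       # B^(2^(r*d)), by squaring T[r-1][C/2]
            fill_row(T, r, C)
        return T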


The system now computes V as follows. For each i ∈ {1, . . . , n}, the system obtains a binary representation of E_i and segments this binary representation into R blocks of d bits each (if there are fewer than R×d bits in the exponent, the system will append 0's in high-order positions until there are R×d bits). The system denotes the r-th such block as E_i^r. Then, the system computes

V_i = ∏_{r=0}^{R−1} T_{r,E_i^r}.


This step requires at most R multi-precision multiplication operations to compute each exponent.
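

A sketch of this final step, reusing the hypothetical build_table function above; each d-bit window of the exponent indexes directly into its row of the table, and windows equal to zero simply multiply by T[r][0] = 1:

    def fixed_base_pow(T, e, d):
        """Compute B**e as a product of at most R table entries, one per d-bit window."""
        result, r = 1, 0
        while e:
            result *= T[r][e & ((1 << d) - 1)]    # entry for the low-order d-bit window
            e >>= d
            r += 1
        return result

    # Example usage: compute B raised to every exponent in E with one shared table.
    B, E, d = 12345, [170, 255, 19], 4
    T = build_table(B, max(E).bit_length(), d)
    assert [fixed_base_pow(T, e, d) for e in E] == [B ** e for e in E]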


In encryption algorithms, multi-precision exponentiation is usually applied relative to some modulus M. For example, a user is interested in computing the following data: V = {B^{E_1} mod M, B^{E_2} mod M, . . . }. In this case, the system applies Montgomery multiplication, which is a method for speeding up successive modular multiplication operations over multi-precision numbers. The system converts B into Montgomery form prior to initiating the process described above, replaces the multi-precision multiplication operations described above with Montgomery multiplication operations, and converts the final responses in V out of Montgomery form prior to returning the answer. To be sure, other algorithmic and methodological solutions for computing over large numbers that utilize the present disclosure are contemplated, and thus the present disclosure is not limited to the examples provided herein.
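

The sketch below illustrates the modular variant under a simplifying assumption: ordinary reduction mod M (Python's % operator) stands in for the Montgomery-form conversion and Montgomery multiplication steps, which a production implementation would substitute at the marked points.

    def build_table_mod(B, Y, d, M):
        """Same table construction, with every product reduced mod M.
        A real implementation would convert B to Montgomery form first and use
        Montgomery multiplication wherever '% M' appears below."""
        C, R = 1 << d, -(-Y // d)
        T = [[1] * C for _ in range(R)]
        T[0][1] = B % M
        for r in range(R):
            if r:
                T[r][1] = T[r - 1][C // 2] ** 2 % M
            for i in range(1, C, 2):
                if i > 1:
                    T[r][i] = T[r][1] * T[r][i - 1] % M
                j = 2 * i
                while j < C:
                    T[r][j] = T[r][j // 2] ** 2 % M
                    j *= 2
        return T

    def fixed_base_pow_mod(T, e, d, M):
        """Compute B**e mod M from the modular table (a real implementation would
        convert the result out of Montgomery form here)."""
        result, r = 1, 0
        while e:
            result = result * T[r][e & ((1 << d) - 1)] % M
            e >>= d
            r += 1
        return result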


Given some base B and exponents E, the system determines whether the multi-precision exponentiation algorithm will be faster than performing the exponentiation directly on each exponent (i.e., with no lookup table). The system also determines what value of d is optimal by estimating the number of multi-precision operations (such as multiplication) that will be required with d ∈ {0, 1, 2, . . . , d_max}, where d = 0 corresponds to no lookup table and d_max is the largest value of the exponent size parameter permitted by memory limitations. The number of values in the lookup table, not counting the 1's in the first column, will be equal to R×(2^d − 1). Therefore, some embodiments impose a limit upon d so that the lookup table does not grow too large and run the program out of memory (e.g., the available memory for a multi-precision exponentiation algorithm application is exhausted). This exhaustion of memory includes memory dedicated to the multi-precision exponentiation algorithm application or available free memory of a computing device that is executing the multi-precision exponentiation algorithm application.


When considering the multi-precision multiplications involved with each value of d, the system segments these multiplications into two categories: squarings (i.e., multiplication of a number with itself) and non-squarings (i.e., multiplication of two different numbers). It will be understood that, in practice, a multi-precision squaring operation is significantly faster than a non-squaring operation.


In some embodiments, W is the index of the highest bit set in E (i.e., W = ⌊log₂ E_max⌋). With d = 0, the number of multi-precision squarings required for each exponent is equal to W, and the number of non-squarings averages (log₂ E)/2. With d > 0, computing the lookup table requires W·2^{d−1}/d squarings and W·(2^{d−1} − 1)/d non-squarings as a one-time cost, and then an average of W·(2^{d−1} − 1)/(d·2^{d−1}) non-squarings to compute each exponent.


Given n, W, and d_max, the system takes the optimal value of d to be the value that minimizes the total time given by these estimates. The system utilizes this value to determine how large the lookup table would be. The system skips generating a lookup table and calculates exponentiations directly when d = 0 is optimal.
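

One way to sketch this selection step, using the operation-count estimates above; the relative weight of a squaring versus a general multiplication (0.7 here) is an illustrative assumption, not a figure from the disclosure:

    def choose_d(n, W, d_max, squaring_weight=0.7):
        """Return the d in {0, ..., d_max} minimizing the estimated total cost of
        computing n exponentiations, in units of one general multiplication."""
        def cost(d):
            if d == 0:                          # no lookup table: direct exponentiation
                return n * (W * squaring_weight + W / 2)
            table = (W * 2 ** (d - 1) / d) * squaring_weight + W * (2 ** (d - 1) - 1) / d
            per_exp = W * (2 ** (d - 1) - 1) / (d * 2 ** (d - 1))
            return table + n * per_exp
        return min(range(d_max + 1), key=cost)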


In this manner, efficient computation of multi-precision exponentiation, in cases where the same base must be raised to many different exponents, may be completed using the foregoing methods and systems.



FIG. 1 is a flowchart of an example method of the present disclosure for utilizing a multi-precision exponentiation algorithm to perform exponentiation of a large base number (e.g., above 64 bits) across a plurality of exponents.


In some embodiments, the method includes a step 102 of identifying a large base number within a computational process that requires exponentiation with a plurality of exponents. For example, a large base number is identified during the calculation of an analytic or generating a response to a query or a mathematical problem.


In some embodiments, this step includes determining that a base number has a size that exceeds a size threshold. For example, the size threshold is any number greater than 64 bits.


Once a large base number has been identified, the method includes an optional step of determining if a time required for using the multi-precision exponentiation algorithm to exponentiate the base number is less than a time required to perform exponentiation of the base number directly using exponents. In some embodiments, the multi-precision exponentiation algorithm is used only when the time required for using the multi-precision exponentiation algorithm to exponentiate the base number is less than the time required to perform exponentiation of the base number directly using exponents.


In sum, step 104 includes determining whether the system should use the multi-precision exponentiation algorithm described herein to compute all exponentiations at once, versus using a standard exponentiation algorithm to compute each exponent individually.


Stated otherwise, this step determines if the use of a lookup table is preferred over calculating exponential values on the fly (e.g., at computational runtime). This includes comparing the time to generate the lookup table against the time to calculate exponential values on the fly.


If the use of the lookup table is preferred, the method includes a step 106 of applying a multi-precision exponentiation algorithm to a base number. Again, the multi-precision exponentiation algorithm comprises the use of a pre-generated lookup table used to perform calculations on the base number. As specified above, the lookup table comprises pre-calculated exponentiated values of the base number. For example, the base number is raised exponentially using a range of exponent values of the specified exponents. These resultant values are stored in the lookup table. These values are obtained when performing a desired calculation on the base number. Rather than having to exponentiate the base number a plurality of times during performance of the calculation, the system obtains pre-exponentiated values from the lookup table as needed. For example, assume the desired exponent is the binary number 10101010, the maximum exponent size is Y = 8 bits, and the window size d is 4, for an arbitrary base number B. This means that the lookup table will contain both B^{10100000} and B^{1010}, so the desired result B^{10101010} is obtained in one multiplication operation. Without using the lookup table, seven squaring operations are necessary to obtain B^{10000000}, plus three multiplication operations (with B^{100000}, B^{1000}, and B^{10}) to obtain the same result.
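

The window arithmetic in this example can be checked directly with a short sketch (the small base value 7 is an arbitrary choice used only for verification):

    B = 7   # arbitrary small base, used only to check the arithmetic
    assert B ** 0b10101010 == B ** 0b10100000 * B ** 0b1010   # one multiplication using table entries
    assert B ** 0b10101010 == B ** 0b10000000 * B ** 0b100000 * B ** 0b1000 * B ** 0b10   # combining powers of two without the table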


In some embodiments, the method includes a step 108 of returning at least one calculated value for the base number.


In one or more embodiments, the method includes optional steps such as converting the base number into a form suitable for use in an encryption algorithm.



FIG. 2 is a flowchart of an example sub-method for determining if a time required for using the multi-precision exponentiation algorithm is less than a time required to perform the calculations of the base number directly using exponents (as noted in step 104 of FIG. 1).


In various embodiments, the method of FIG. 2 includes a step 202 of identifying an exponent size parameter. As mentioned above, the exponent size parameter is greater than zero and is equal to or less than the number of bits required to represent the largest exponent value in the set of exponents. In some instances, the exponent size parameter is selected to prevent the lookup table from growing to a point where memory of a multi-precision exponentiation algorithm application is exhausted.


Next, the method includes a step 204 of estimating a number of multi-precision operations required using the exponent size parameter. In some instances, the method includes a step 206 of identifying which of the multi-precision operations are squarings or non-squarings, as well as a step 208 of determining a one-time cost using calculations from the squarings and the non-squarings.


In some embodiments, the method includes an optional step of identifying the index of the highest bit set in the exponents.


In one or more embodiments, the method includes a step 210 of calculating an optimal value of the exponent size parameter (i.e., the d-bit window size). This calculation is a function of the one-time cost, the index of the highest bit set, and a highest possible value for the exponent size parameter.
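

For instance, the hypothetical choose_d sketch given earlier could be exercised for this step as follows (the numbers are illustrative only):

    n, W, d_max = 10_000, 2047, 8      # many exponents of roughly 2048 bits each
    d_opt = choose_d(n, W, d_max)      # optimal window size under the estimates above
    print(d_opt)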



FIG. 3 is a diagrammatic representation of an example machine in the form of a computer system 1, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a base station, a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.


The example computer system 1 includes a processor or multiple processors 5 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 10 and static memory 15, which communicate with each other via a bus 20. The computer system 1 may further include a video display 35 (e.g., a liquid crystal display (LCD)). The computer system 1 may also include alpha-numeric input device(s) 30 (e.g., a keyboard), a cursor control device (e.g., a mouse), a voice recognition or biometric verification unit (not shown), a drive unit 37 (also referred to as disk drive unit), a signal generation device 40 (e.g., a speaker), and a network interface device 45. The computer system 1 may further include a data encryption module (not shown) to encrypt data.


The drive unit 37 includes a computer or machine-readable medium 50 on which is stored one or more sets of instructions and data structures (e.g., instructions 55) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions 55 may also reside, completely or at least partially, within the main memory 10 and/or within the processors 5 during execution thereof by the computer system 1. The main memory 10 and the processors 5 may also constitute machine-readable media.


The instructions 55 may further be transmitted or received over a network via the network interface device 45 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)). While the machine-readable medium 50 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.


Not all components of the computer system 1 are required and thus portions of the computer system 1 can be removed if not needed, such as Input/Output (I/O) devices (e.g., input device(s) 30). One skilled in the art will recognize that the Internet service may be configured to provide Internet access to one or more computing devices that are coupled to the Internet service, and that the computing devices may include one or more processors, buses, memory devices, display devices, input/output devices, and the like. Furthermore, those skilled in the art may appreciate that the Internet service may be coupled to one or more databases, repositories, servers, and the like, which may be utilized in order to implement any of the embodiments of the disclosure as described herein.


As used herein, the term “module” may also refer to any of an application-specific integrated circuit (“ASIC”), an electronic circuit, a processor (shared, dedicated, or group) that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present technology has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the present technology in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present technology. Exemplary embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, and to enable others of ordinary skill in the art to understand the present technology for various embodiments with various modifications as are suited to the particular use contemplated.


Aspects of the present technology are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the present technology. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present technology. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.


In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular embodiments, procedures, techniques, etc. in order to provide a thorough understanding of the present invention. However, it will be apparent to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details.


Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” or “according to one embodiment” (or other phrases having similar import) at various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Furthermore, depending on the context of discussion herein, a singular term may include its plural forms and a plural term may include its singular form. Similarly, a hyphenated term (e.g., “on-demand”) may be occasionally interchangeably used with its non-hyphenated version (e.g., “on demand”), a capitalized entry (e.g., “Software”) may be interchangeably used with its non-capitalized version (e.g., “software”), a plural term may be indicated with or without an apostrophe (e.g., PE's or PEs), and an italicized term (e.g., “N+1”) may be interchangeably used with its non-italicized version (e.g., “N+1”). Such occasional interchangeable uses shall not be considered inconsistent with each other.


Also, some embodiments may be described in terms of “means for” performing a task or set of tasks. It will be understood that a “means for” may be expressed herein in terms of a structure, such as a processor, a memory, an I/O device such as a camera, or combinations thereof. Alternatively, the “means for” may include an algorithm that is descriptive of a function or method step, while in yet other embodiments the “means for” is expressed in terms of a mathematical formula, prose, or as a flow chart or signal diagram.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


If any disclosures are incorporated herein by reference and such incorporated disclosures conflict in part and/or in whole with the present disclosure, then to the extent of conflict, and/or broader disclosure, and/or broader definition of terms, the present disclosure controls. If such incorporated disclosures conflict in part and/or in whole with one another, then to the extent of conflict, the later-dated disclosure controls.


The terminology used herein can imply direct or indirect, full or partial, temporary or permanent, immediate or delayed, synchronous or asynchronous, action or inaction. For example, when an element is referred to as being “on,” “connected” or “coupled” to another element, then the element can be directly on, connected or coupled to the other element and/or intervening elements may be present, including indirect and/or direct variants. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present. The description herein is illustrative and not restrictive. Many variations of the technology will become apparent to those of skill in the art upon review of this disclosure.


While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the invention to the particular forms set forth herein. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments.

Claims
  • 1. A method, comprising: determining, via a processor, that a base number has a size that exceeds a size threshold, wherein the size threshold is greater than 64 bits; andapplying, via the processor, a multi-precision exponentiation algorithm to the base number, the multi-precision exponentiation algorithm comprising a pre-generated lookup table used to perform calculations on the base number, the pre-generated lookup table comprising pre-calculated exponentiated values of the base number, wherein the pre-generated lookup table is stored in a memory, the memory being coupled to the processor;wherein an exponent size parameter of each of the pre-calculated exponentiated values in the pre-generated lookup table is selected by the processor based on a size limitation of a memory portion dedicated in the memory to the multi-precision exponentiation algorithm to prevent the pre-generated lookup table from growing to a size at which the memory portion dedicated in the memory to the multi-precision exponentiation algorithm is exhausted.
  • 2. The method according to claim 1, further comprising: identifying the base number as having a size that is more than 64 bits; anddetermining if a time required for using the multi-precision exponentiation algorithm is less than a time required to perform the calculations of the base number directly using exponents,wherein the multi-precision exponentiation algorithm is used only when the time required for using the multi-precision exponentiation algorithm is less than the time required to perform the calculations of the base number directly using exponents.
  • 3. The method according to claim 2, wherein determining if a time required for using the multi-precision exponentiation algorithm is less than a time required to perform the calculations of the base number directly using exponents further comprises: identifying the exponent size parameter;estimating a number of multi-precision operations required using the exponent size parameter;identifying which of the number of multi-precision operations are squarings or non-squarings; anddetermining a one-time cost using calculations from the squarings and the non-squarings.
  • 4. The method according to claim 1, further comprising selecting exponents that will be used to exponentiate the base number.
  • 5. The method according to claim 4, further comprising identifying an index of a highest bit set in the exponents.
  • 6. The method according to claim 1, further comprising converting the base number into a form suitable for use in an encryption algorithm.
  • 7. The method according to claim 5, wherein an optimal value of the exponent size parameter is a function of a number of the exponents, the index of the highest bit set, and a highest possible value for the exponent size parameter.
  • 8. A system, comprising: a processor; anda memory for storing executable instructions, the memory being coupled to the processor, the processor executing the instructions to:determine if a time required for using a multi-precision exponentiation algorithm to exponentiate a base number is less than a time required to perform exponentiation of the base number directly using exponents;apply a multi-precision exponentiation algorithm to the base number, wherein the multi-precision exponentiation algorithm comprises a pre-generated lookup table used to perform calculations on the base number, the pre-generated lookup table comprising pre-calculated exponentiated values of the base number, wherein the pre-generated lookup table is stored in the memory;wherein an exponent size parameter of each of the pre-calculated exponentiated values in the pre-generated lookup table is selected by the processor based on a size limitation of a memory portion dedicated in the memory to the multi-precision exponentiation algorithm to prevent the pre-generated lookup table from growing to a size at which the memory portion dedicated in the memory to the multi-precision exponentiation algorithm is exhausted; andreturn a calculated value for the base number.
  • 9. The system according to claim 8, wherein the processor is further configured to identify a base number with a size that is more than 64 bits.
  • 10. The system according to claim 9, wherein the pre-generated lookup table comprises raised values corresponding to exponentiation of the base number with the exponents.
  • 11. The system according to claim 10, wherein the processor is further configured to select the exponents that will be used to exponentiate the base number when generating the pre-generated lookup table.
  • 12. The system according to claim 8, wherein the processor is further configured to convert the base number into a form suitable for use in an encryption algorithm.
CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit and priority of U.S. Provisional Application Ser. No. 62/448,916, filed on Jan. 20, 2017; U.S. Provisional Application Ser. No. 62/448,883, filed on Jan. 20, 2017; U.S. Provisional Application 62/448,885, filed on Jan. 20, 2017; and U.S. Provisional Application Ser. No. 62/462,818, filed on Feb. 23, 2017, all of which are hereby incorporated by reference herein, including all references and appendices, for all purposes.

Related Publications (1)
Number Date Country
20180224882 A1 Aug 2018 US
Provisional Applications (4)
Number Date Country
62448916 Jan 2017 US
62448883 Jan 2017 US
62448885 Jan 2017 US
62462818 Feb 2017 US