The present invention relates to support vector machines, and more specifically, to optimizing the computations for support vector machines.
In machine learning, support vector machines (SVMs, also called support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples to one category or the other, making it a deterministic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible, while allowing some points to lie on the opposite side of the gap at a penalty. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
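For illustration only (a sketch, not part of the claimed method), the decision rule of an already-trained linear SVM can be expressed as follows; the values of w and b below are hypothetical:

```python
# A linear SVM predicts the category of a new example by which side of the
# separating hyperplane w.x - b = 0 the example falls on.
def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) - b
    return 1 if score >= 0 else -1

w, b = [1.0, -2.0], 0.5  # hypothetical trained parameters
print(classify(w, b, [3.0, 1.0]))  # score 3 - 2 - 0.5 = 0.5, so category 1
print(classify(w, b, [0.0, 1.0]))  # score 0 - 2 - 0.5 = -2.5, so category -1
```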
According to one embodiment, a method, by a computer, of solving a support vector machine problem on table J, defined as the join of two tables T1 and T2, without explicitly joining the tables T1 and T2 is provided, in which the table T1 has m rows (piT, uiT), i=1, . . . , m, and the table T2 has n rows (qjT, vjT), j=1, . . . , n. The method includes providing a primal optimization problem over a join of the tables T1 and T2 and obtaining a modified optimization problem from the primal optimization problem. The computer reduces penalty variables in the modified optimization problem by replacing the penalty variables in a form of ξij for each (i,j)∈IJ with the penalty variables in a form of ξij=ηi+ζj. The computer obtains a compact form of the modified optimization problem in which the compact form comprises the penalty variables in the form of ξij=ηi+ζj. The computer solves the compact form of the modified optimization problem.
Additional features and advantages are realized through the techniques of the embodiments of the present invention. Other embodiments and aspects of the invention are described in detail herein and are considered a part of the claimed invention. For a better understanding of the invention with the advantages and the features, refer to the description and to the drawings.
The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
Support vector machines (SVMs) have become a very important tool for classification problems. Computing an SVM amounts to solving a certain optimization problem. The SVM optimization problem is posed with respect to a set of labeled examples given explicitly. In real-life databases, however, the data is often distributed over various tables. Even if the data is given in a single table, there are often external sources of data that can improve the accuracy of a classifier if incorporated into it. For example, a given table providing attributes of individuals that have to be classified may include the town where each individual resides but no attributes of that town. An external source may provide various attributes of towns, or transactions that took place in various towns, which may be relevant to the classification of individuals. Thus, it is desirable to build a classifier that takes some of these attributes or transactions into account. This approach calls for joining the tables on the town column.
To apply a standard SVM algorithm when attributes are distributed over tables, one first has to join the tables. However, joining tables explicitly may not be possible due to the size of the product. Thus, the question is whether it is possible to obtain an SVM for the join without generating the joined table explicitly. Here, it is shown how this can be done for the join of two tables. In general, the size of the join of two tables can be quadratic in terms of the sizes of the joined tables. Embodiments are configured to modify standard SVM problems as discussed further below (in algorithms).
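The quadratic blowup can be seen with a toy example (hypothetical data): if every row of T1 shares a single join-key value with every row of T2, the explicit join has a number of rows equal to the product of the table sizes:

```python
# 100 + 100 input rows produce a 100 * 100 = 10,000-row explicit join.
m = n = 100
T1 = [("p%d" % i, "townA") for i in range(m)]  # (other attributes, join key)
T2 = [("townA", "q%d" % j) for j in range(n)]  # (join key, other attributes)
join = [(p, u, q) for (p, u) in T1 for (v, q) in T2 if u == v]
print(len(join))  # 10000
```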
Turning to the figures,
Generally, in terms of hardware architecture, the computer 100 may include one or more processors 110, computer readable storage memory 120, and one or more input and/or output (I/O) devices 170 that are communicatively coupled via a local interface (not shown). The local interface can be, for example but not limited to, one or more buses or other wired or wireless connections, as is known in the art. The local interface may have additional elements, such as controllers, buffers (caches), drivers, repeaters, and receivers, to enable communications. Further, the local interface may include address, control, and/or data connections to enable appropriate communications among the aforementioned components.
The processor 110 is a hardware device for executing software that can be stored in the memory 120. The processor 110 can be virtually any custom made or commercially available processor, a central processing unit (CPU), a digital signal processor (DSP), or an auxiliary processor among several processors associated with the computer 100, and the processor 110 may be a semiconductor based microprocessor (in the form of a microchip) or a macroprocessor.
The computer readable memory 120 can include any one or combination of volatile memory elements (e.g., random access memory (RAM), such as dynamic random access memory (DRAM), static random access memory (SRAM), etc.) and nonvolatile memory elements (e.g., ROM, erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), tape, compact disc read only memory (CD-ROM), disk, diskette, cartridge, cassette or the like, etc.). Note that the memory 120 can have a distributed architecture, where various components are situated remote from one another, but can be accessed by the processor(s) 110.
The software in the computer readable memory 120 may include one or more separate programs, each of which comprises an ordered listing of executable instructions for implementing logical functions. The software in the memory 120 includes a suitable operating system (O/S) 150, compiler 140, source code 130, and one or more applications 160 of the exemplary embodiments. As illustrated, the application 160 comprises numerous functional components for implementing the features, processes, methods, functions, and operations of the exemplary embodiments.
The operating system 150 may control the execution of other computer programs, and provides scheduling, input-output control, file and data management, memory management, and communication control and related services.
The software application 160 may be a source program, executable program (object code), script, or any other entity comprising a set of instructions to be performed. When the application 160 is a source program, the program is usually translated via a compiler (such as the compiler 140), assembler, interpreter, or the like, which may or may not be included within the memory 120, so as to operate properly in connection with the O/S 150. Furthermore, the application 160 can be written in (a) an object oriented programming language, which has classes of data and methods, or (b) a procedural programming language, which has routines, subroutines, and/or functions.
The I/O devices 170 may include input devices (or peripherals) such as, for example but not limited to, a mouse, keyboard, scanner, microphone, camera, etc. Furthermore, the I/O devices 170 may also include output devices (or peripherals), for example but not limited to, a printer, display, etc. Finally, the I/O devices 170 may further include devices that communicate both inputs and outputs, for instance but not limited to, a NIC or modulator/demodulator (for accessing remote devices, other files, devices, systems, or a network), a radio frequency (RF) or other transceiver, a telephonic interface, a bridge, a router, etc. The I/O devices 170 also include components for communicating over various networks, such as the Internet or an intranet. The I/O devices 170 may be connected to and/or communicate with the processor 110 utilizing Bluetooth connections and cables (via, e.g., Universal Serial Bus (USB) ports, serial ports, parallel ports, FireWire, HDMI (High-Definition Multimedia Interface), etc.).
Additionally, the computer 100 may include a database 180 stored in memory 120. The database 180 may include various tables such as table T1 and T2 discussed herein. Also, new table J may be stored in the database 180.
Referring now to
Subsection headings are provided below for explanation purposes and for ease of understanding. The sub-section headings are not meant to limit the scope of the present disclosure. According to embodiments, the software application 160 running on the processor 110 of computer 100 is configured to execute each of the algorithms (including equations and problems) discussed herein (including the subsections below).
1. Standard SVM
We first review the standard SVM problem. The input table consists of m “examples” given as feature vectors xi∈Rd and corresponding class labels yi∈{−1, 1}, i=1, . . . , m.
The Primal Problem
The primal SVM optimization problem is the following:
Minimizew,b,ξ ½∥w∥2+C·Σi=1mξi
subject to yixiTw−yib+ξi≧1 (i=1, . . . , m)
ξi≧0 (i=1, . . . , m). (1)
Note that w is the unknown vector defining the orientation of a hyperplane, b is a scalar, and ξ is a vector of penalty variables.
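For intuition (an illustrative sketch, not the claimed method): for a fixed (w, b), the smallest feasible penalties in (1) are ξi=max{0, 1−yi(xiTw−b)}, so the primal objective can be evaluated directly. The data below is hypothetical:

```python
# Evaluates the objective of (1) at a candidate (w, b), using the minimal
# feasible penalty variables xi_i = max(0, 1 - y_i * (x_i.w - b)).
def primal_objective(w, b, X, y, C):
    norm_sq = sum(wi * wi for wi in w)
    penalties = sum(
        max(0.0, 1.0 - yi * (sum(wk * xk for wk, xk in zip(w, x)) - b))
        for x, yi in zip(X, y))
    return 0.5 * norm_sq + C * penalties

X = [[2.0, 0.0], [-2.0, 0.0]]  # toy examples
y = [1, -1]
print(primal_objective([1.0, 0.0], 0.0, X, y, C=1.0))  # 0.5: both margins are 2, no penalty
```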
The Dual Problem
The Lagrangian function of the problem in (1) is the following:
L(w, b, ξ; α)=½∥w∥2+C·Σi=1mξi+Σi=1mαi(1−yixiTw+yib−ξi). (2)
Note that C is a chosen penalty coefficient, e.g., C=1. Also, note that α is a vector of dual variables/multipliers.
In the following problem, an optimal solution must satisfy the constraints of (1) and also αi=0 for every i such that yixiTw−yib+ξi>1:
Minimizew,b,ξ{maxα{L(w, b, ξ; α): α≧0}: ξ≧0}. (3)
It follows that (3) is equivalent to (1). Due to the convexity in terms of (w, b, ξ) and linearity in terms of α, the optimal value of (3) is equal to the optimal value of the following:
Maximizeα{minw,b,ξ{L(w, b, ξ; α): ξ≧0}: α≧0}. (4)
Let α≧0 be fixed for a moment. If Σi=1myiαi≠0, then the term bΣi=1myiαi is not bounded from below. Similarly, if αi>C, then the term ξi(C−αi) is not bounded from below as ξi→∞. Therefore, an optimal α for (4) must satisfy
Σi=1mαiyi=0 and αi≦C(i=1, . . . , m).
Next, the unique w that minimizes L(w, b, ξ; α) is
w=Σi=1mαiyixi. (5)
Finally, if ξ≧0 minimizes L(w, b, ξ; α), then for every i such that αi<C, necessarily ξi=0, and hence
Σi=1mξi(C−αi)=0. (6)
Thus, the problem in (4) is equivalent to the following, which can be viewed as the dual problem:
Minimizeα ½Σi,jyiyjxiTxjαiαj−Σiαi
subject to Σi=1myiαi=0
0≦αi≦C (i=1, . . . , m). (7)
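Once the dual (7) is solved, the primal vector is recovered from the multipliers as w=Σiαiyixi, per the derivation above. A sketch with assumed multiplier values:

```python
# Recovers w = sum_i alpha_i * y_i * x_i from a dual solution of (7).
def recover_w(alpha, y, X):
    d = len(X[0])
    w = [0.0] * d
    for a, yi, x in zip(alpha, y, X):
        for k in range(d):
            w[k] += a * yi * x[k]
    return w

X = [[1.0, 1.0], [-1.0, -1.0]]
y = [1, -1]
alpha = [0.25, 0.25]  # assumed multipliers; note sum_i y_i * alpha_i = 0
print(recover_w(alpha, y, X))  # [0.5, 0.5]
```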
2. SVM on a Join of Two Tables (Executed by the Software Application 160)
2.1 Formulation
We now consider a problem with two tables, T1 and T2. The table T1 has m rows (piT, uiT), i=1, . . . , m, and the table T2 has n rows (qjT, vjT), j=1, . . . , n, with columns as follows. (Note that piT and uiT are attributes of table T1 and that qjT and vjT are attributes of table T2.) The attributes that are represented by the columns of these tables are of three types described below. Denote by P the set of attributes represented by the pis, and by Q the set of attributes represented by the qjs. The set U of attributes represented by the uis is the same as the set V of attributes represented by the vjs (these are the common attributes of the two tables). Note that the trailing s denotes the plural. The class labels yi are associated with the rows of T1. The (universal) join of T1 and T2 is a new table J, consisting of |P|+|U|+|Q| columns, defined as follows. For each i, i=1, . . . , m, if there is no j such that uiT=vjT, then J has a row xi0T=(piT, uiT, 0T); otherwise, J has rows of the form xijT=(piT, uiT, qjT) for every pair (i,j) such that uiT=vjT. Denote by wP, wU and wQ the projections of the (unknown) vector w on the sets P, U and Q, respectively. Also, denote
I0={(i, 0):(∀j)(ui≠vj)}
and
IJ=I0∪{(i,j):ui=vj}.
(Note that I0 and IJ are both sets of index pairs.) Thus, the explicit form of the primal problem over the join is:
Minimizew,b,ξ ½∥w∥2+C·Σ(i,j)∈IJξij
subject to yixijTw−yib+ξij≧1 ((i,j)∈IJ)
ξij≧0 ((i,j)∈IJ). (8)
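The index set IJ appearing in (8) can be enumerated with a hash lookup on the common attributes, without materializing the rows of J; a minimal sketch (function and variable names are illustrative):

```python
# Builds IJ = I0 union {(i, j) : u_i = v_j} directly from the two columns
# of shared attributes, using 1-based row indexes as in the text.
def build_IJ(u_values, v_values):
    rows_by_value = {}
    for j, v in enumerate(v_values, start=1):
        rows_by_value.setdefault(v, []).append(j)
    IJ = []
    for i, u in enumerate(u_values, start=1):
        matches = rows_by_value.get(u, [])
        if not matches:
            IJ.append((i, 0))  # member of I0: no matching row in T2
        else:
            IJ.extend((i, j) for j in matches)
    return IJ

print(build_IJ(["a", "b", "c"], ["b", "b"]))  # [(1, 0), (2, 1), (2, 2), (3, 0)]
```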
The size of the latter (i.e., equation (8)) may be too large, depending on the size of the set IJ. Our goal is to solve the SVM problem on J without explicitly generating all the rows of J. We can reformulate this problem by first observing that
xijTw=piTwP+uiTwU+qjTwQ, (9)
where, for convenience, we denote q0=0.
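Equation (9) reflects that xij is the concatenation (pi, ui, qj), so its inner product with w splits into three block products; a quick numerical check with arbitrary toy vectors:

```python
# Verifies x_ij.w = p_i.w_P + u_i.w_U + q_j.w_Q for concatenated blocks.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

p_i, u_i, q_j = [1.0, 2.0], [3.0], [4.0, 5.0]
w_P, w_U, w_Q = [0.5, 0.5], [1.0], [-1.0, 2.0]
x_ij = p_i + u_i + q_j  # concatenation, as in the definition of J
w = w_P + w_U + w_Q
print(dot(x_ij, w))                                   # 10.5
print(dot(p_i, w_P) + dot(u_i, w_U) + dot(q_j, w_Q))  # 10.5
```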
As a first step, we reduce the number of penalty variables as follows. Instead of using a penalty variable ξij for each (i,j)∈IJ, we generate those penalties in the form
ξij=ηi+ζj (10)
which makes sense in view of (9) because in an optimal solution
ξij=max{0,1−yixijTw+yib}. (11)
Thus, we obtain the following modified optimization problem:
Minimizew,b,η,ζ ½∥w∥2+C·Σi=1mJ(i)·ηi+C·Σj=1nI(j)·ζj
subject to yixijTw−yib+ηi+ζj≧1 ((i,j)∈IJ)
ηi,ζj≧0, (12)
where J(i)=|{j:(i,j)∈IJ}| and I(j)=|{i:(i,j)∈IJ}|. In equation (10), we use the variables ηi and ζj (of which there are only m+n in total) instead of the ξij (of which there can be as many as m·n), i.e., instead of ξij we use ηi+ζj. This reduces the number of penalty variables from up to m·n (the ξij) to m+n (the ηi and ζj).
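The coefficients J(i) and I(j) in (12) count how many pairs of IJ each row participates in, and can be tallied in one pass; a sketch with a small assumed index set:

```python
# Tallies J(i) = |{j : (i, j) in IJ}| and I(j) = |{i : (i, j) in IJ}|,
# the weights of the penalties eta_i and zeta_j in (12).
from collections import Counter

IJ = [(1, 0), (2, 1), (2, 2), (3, 0)]  # small assumed index set
J = Counter(i for (i, _) in IJ)
I = Counter(j for (_, j) in IJ if j != 0)  # j = 0 marks the I0 rows
print(J[2], I[1])  # 2 1
```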
Note that the number of constraints in problem (12) may still be too large for solving the problem in practice (depending on the size of IJ), so we need to simplify the problem further.
2.2 A Linear-Size Formulation
Denote by z1, . . . , zl all the distinct values that appear as ui. For each k, k=1, . . . , l, denote
Ik={i:ui=zk}
and
Jk={j:vj=zk}.
Note that k is the index of the distinct values zk. Some sets Jk may be empty. Note that the sets I1, . . . , Il partition the set {1, . . . , m} and that the sets J1, . . . , Jl are pairwise disjoint. We introduce auxiliary variables σ1, . . . , σl, and τk for each k=1, . . . , l such that Jk≠∅.
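The sets Ik and Jk can be computed by grouping row indexes by shared value; a sketch (toy data, 1-based row indexes):

```python
# Groups T1 rows (I_k) and T2 rows (J_k) by the distinct values z_k that
# appear among the u_i; some J_k may be empty.
def group_by_value(values):
    groups = {}
    for idx, val in enumerate(values, start=1):
        groups.setdefault(val, []).append(idx)
    return groups

u = ["x", "y", "x"]  # u_i values in T1
v = ["y", "z"]       # v_j values in T2
z = sorted(set(u))   # distinct values z_1, ..., z_l
I_groups = group_by_value(u)
J_groups = group_by_value(v)
print([(zk, I_groups[zk], J_groups.get(zk, [])) for zk in z])
# [('x', [1, 3], []), ('y', [2], [1])]
```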
Consider the following system of constraints:
yipiTwP−yib+ηi≧σk (i∈Ik, k=1, . . . , l)
qjTwQ+ζj≧τk (j∈Jk, k=1, . . . , l)
σk+zkTwU+τk≧1 (for k=1, . . . , l such that Jk≠∅)
σk+zkTwU≧1 (for k=1, . . . , l such that Jk=∅). (13)
The constraints from equation (12) have been broken into four separate families of constraints, as seen in equation (13). Note that the auxiliary variables (σ1, . . . , σl, and τk for k=1, . . . , l such that Jk≠∅) are new variables introduced into the system so that constraining them together with the original variables in certain ways (as discussed) results in the same set of feasible values for the original variables, yet the size of the algebraic formulation is smaller. The auxiliary variables help solve the problem because they allow a reduction in the number of constraints without changing the set of feasible solutions.
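To see the savings, compare constraint counts on hypothetical block sizes: the system in (8) needs one constraint per pair in IJ, while (13) needs roughly m + n + l:

```python
# Hypothetical sizes (|I_k|, |J_k|) for l = 3 distinct shared values; J_2 empty.
sizes = [(50, 40), (30, 0), (20, 60)]
# |IJ|: each i in I_k pairs with every j in J_k, or with j = 0 if J_k is empty.
pairwise = sum(mk * nk if nk else mk for (mk, nk) in sizes)
# System (13): one constraint per i, one per matched j, one per k.
linear = sum(mk for mk, _ in sizes) + sum(nk for _, nk in sizes) + len(sizes)
print(pairwise, linear)  # 3230 203
```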
Proposition 2.1. A vector w satisfies the system
yixijTw−yib+ηi+ζj≧1 ((i,j)∈IJ) (14)
if and only if there exist σ1, . . . , σl and τ1, . . . , τl that together with w satisfy the system (13).
Thus, we obtain the following compact form:
Minimizew,b,η,ζ,σ,τ ½∥wP∥2+½∥wU∥2+½∥wQ∥2+C·Σi=1mJ(i)·ηi+C·Σj=1nI(j)·ζj
subject to yipiTwP−yib+ηi−σk≧0 (i∈Ik, k=1, . . . , l)
qjTwQ+ζj−τk≧0 (j∈Jk, k=1, . . . , l)
σk+zkTwU+τk≧1 (for k=1, . . . , l such that Jk≠∅)
σk+zkTwU≧1 (for k=1, . . . , l such that Jk=∅)
ηi≧0 (i=1, . . . , m), ζj≧0 (j=1, . . . , n). (15)
At an optimal solution,
σk=mini∈Ik{yipiTwP−yib+ηi}
and
τk=minj∈Jk{qjTwQ+ζj}.
(Note that w, b, η, ζ, σ, τ are decision variables of equation (15).) The Lagrangian function of the latter (i.e., equation (15)) is derived as follows. Let αi≧0 be multipliers associated with the constraints:
yipiTwP−yib+ηi−σk≧0 (i∈Ik, k=1, . . . , l) (16)
and recall that the Iks are pairwise disjoint. Let βj≧0 be multipliers associated with the constraints:
qjTwQ+ζj−τk≧0 (j∈Jk, k=1, . . . , l) (17)
and let γk≧0 be multipliers associated with the constraints
σk+zkTwU+τk≧1 (for k=1, . . . , l such that Jk≠∅)
σk+zkTwU≧1 (for k=1, . . . , l such that Jk=∅). (18)
The Lagrangian function is:
L(wP,wU,wQ,η,ζ,σ,τ;α,β,γ)=½∥wP∥2+½∥wU∥2+½∥wQ∥2+C·Σi=1mJ(i)ηi+C·Σj=1nI(j)ζj−Σk=1lΣi∈Ikαi(yipiTwP−yib+ηi−σk)−Σk=1lΣj∈Jkβj(qjTwQ+ζj−τk)−Σk=1lγk(σk+zkTwU+τk−1), (19)
where, by convention, τk=0 for every k such that Jk=∅.
Rearranging terms, we obtain
L(wP,wU,wQ,η,ζ,σ,τ;α,β,γ)=(½∥wP∥2−ΣiαiyipiTwP)+(½∥wU∥2−ΣkγkzkTwU)+(½∥wQ∥2−ΣjβjqjTwQ)+Σk=1lγk+bΣiyiαi+Σiηi(CJ(i)−αi)+Σjζj(CI(j)−βj)+Σk=1lσk(Σi∈Ikαi−γk)+Σk=1lτk(Σj∈Jkβj−γk). (20)
The dual problem is:
Maximizeα,β,γ{minw,b,η,ζ,σ,τ{L(w,b,η,ζ,σ,τ;α,β,γ): η,ζ≧0}: α,β,γ≧0}. (21)
Let α, β and γ be fixed for the moment. We must have
wP=Σi=1mαiyipi, (22)
wU=Σk=1lγkzk, (23)
wQ=Σj=1nβjqj, (24)
since these are the unique minimizers of the three quadratic terms in (20).
The following are necessary conditions for α, β and γ to be optimal for (21)
Σi=1myiαi=0
αi≦CJ(i) (i=1, . . . , m)
βj≦CI(j) (j=1, . . . , n)
γk≦αi (k=1, . . . , l, i∈Ik)
γk≦βj (k=1, . . . , l, j∈Jk) (25)
If the latter system of equations (i.e., the system (25)) holds, then the optimal values of η, ζ, σ and τ yield the following:
Σiηi(CJ(i)−αi)=Σjζj(CI(j)−βj)=Σk=1lσk(Σi∈Ikαi−γk)=Σk=1lτk(Σj∈Jkβj−γk)=0. (26)
It follows that the problem (21) is equivalent to the following dual problem:
Minimizeα,β,γ ½Σi,i′yiyi′piTpi′αiαi′+½Σj,j′qjTqj′βjβj′+½Σk,k′zkTzk′γkγk′−Σk=1lγk
subject to Σi=1myiαi=0
0≦αi≦CJ(i) (i=1, . . . , m)
0≦βj≦CI(j) (j=1, . . . , n)
0≦γk≦αi (k=1, . . . , l, i∈Ik)
0≦γk≦βj (k=1, . . . , l, j∈Jk) (27)
Note that the size of the latter (i.e., equation (27)) is linear. After the values of wP, wQ and wU have been characterized in equations (22)-(24), those values are used to express ∥wP∥2, ∥wQ∥2 and ∥wU∥2. This is how we get the first three terms in the objective function of the system in equation (27), because ∥wP∥2=wPTwP, etc. Note that α, β, and γ are the multipliers associated with the various constraints, as explained above in equations (16)-(18).
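The quadratic part of (27) decomposes into three independent Gram-type matrices over the pi, qj, and zk vectors. A sketch of how one such matrix might be assembled (toy data; the β and γ terms use the same construction with all signs equal to 1):

```python
# Builds the matrix with entries y_a * y_b * (p_a . p_b), i.e. the
# coefficient of alpha_a * alpha_b in the first term of (27).
def signed_gram(vectors, signs):
    return [[signs[a] * signs[b] * sum(x * y for x, y in zip(vectors[a], vectors[b]))
             for b in range(len(vectors))] for a in range(len(vectors))]

p = [[1.0, 1.0], [0.0, 2.0]]  # toy p_i vectors
y = [1.0, -1.0]
print(signed_gram(p, y))  # [[2.0, -2.0], [-2.0, 4.0]]
```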
Note that (i, i′) is a pair of indexes for y, α, and p, with i′=1, . . . , m; that (j, j′) is a pair of indexes for q and β, with j′=1, . . . , n; and that (k, k′) is a pair of indexes for z and γ, with k′=1, . . . , l.
3. Extension to Nonlinear Classification (Executed by the Software Application 160)
In the standard formulation of the nonlinear SVM problem, the vectors xi are lifted to a higher-dimensional space M by a nonlinear transformation φ, and the problem is then handled as a linear SVM with examples φ(xi). The dual problem is:
Minimizeα ½Σi,jyiyjφ(xi)Tφ(xj)αiαj−Σiαi
subject to Σi=1myiαi=0
0≦αi≦C. (28)
and the primal solution vector w∈M must satisfy
w=Σi=1mαiyiφ(xi). (29)
The products φ(xi)Tφ(xj) can be computed by kernels K(x, x′):
φ(xi)Tφ(xj)=K(xi, xj). (30)
For example, the so-called quadratic kernel
K(x, x′)=(1+xTx′)2
implements the transformation
φ(x)=(1, √2x1, . . . , √2xd, x12, . . . , xd2, x1x2, . . . , x1xd, x2x1, . . . , x2xd, . . . , xdx1, . . . , xdxd−1) (31)
so that the product φ(xi)Tφ(xj) can be calculated without calculating the individual values φ(xi) and φ(xj).
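The identity between (30) and (31) can be checked numerically; the sketch below assumes the quadratic kernel K(x, x′)=(1+xTx′)2, with √2 factors on the first-order features:

```python
import math

# Quadratic kernel evaluated in the original space.
def quad_kernel(x, xp):
    return (1.0 + sum(a * b for a, b in zip(x, xp))) ** 2

# Explicit lifting map of (31): constant, scaled first-order terms,
# squares, and all ordered cross products x_i * x_j with i != j.
def phi(x):
    d = len(x)
    return ([1.0]
            + [math.sqrt(2.0) * xi for xi in x]
            + [xi * xi for xi in x]
            + [x[i] * x[j] for i in range(d) for j in range(d) if i != j])

x, xp = [1.0, 2.0], [3.0, -1.0]
lifted = sum(a * b for a, b in zip(phi(x), phi(xp)))
print(round(lifted, 9), quad_kernel(x, xp))  # both equal 4.0
```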
3.1 The Kernel Trick in a Join of Two Tables
In the case of a join of two tables, the examples
xijT=(piT, uiT, qjT)
give rise to the following objective function:
It follows that the linear model can be extended into a (separable) nonlinear one as follows. We consider lifting transformations φ that preserve the column structure of the table in the sense that, for x=(p, u, q), φ(x)=(φP(p), φU(u), φQ(q)). Hence,
φ(xij)Tφ(xi′j′)=φP(pi)TφP(pi′)+φU(ui)TφU(ui′)+φQ(qj)TφQ(qj′).
Thus,
It follows that our problem (27) can be solved in the higher-dimensional space by modifying the objective function into the following:
½Σi,i′yiyi′φP(pi)TφP(pi′)αiαi′+½Σj,j′φQ(qj)TφQ(qj′)βjβj′+½Σk,k′φU(zk)TφU(zk′)γkγk′−Σk=1lγk.
The “kernel trick” can then be applied if we use transformations that are consistent with conventional kernels, KP(p, p′)=φP(p)TφP(p′), KU(u, u′)=φU(u)TφU(u′) and KQ(q, q′)=φQ(q)TφQ(q′), so the objective can be evaluated in the original space.
4. Joining more than Two Tables (Executed by the Software Application 160)
The ideas of the preceding section can be applied to joins of more than two tables. The size of the formulation depends on the complexity of the database. A simple case is when the tables are T1, . . . , Tm and only pairs (Ti, Ti+1) have common columns. Like in the case of joining two tables, we generate the compact formulation by enumerating the distinct values that appear in columns common to two adjacent tables. A similar idea can be applied in a more general setting, e.g., a tree structure, with at most three tables having common columns.
Note that the software application 160 is configured to execute each of the algorithms (including the various equations) discussed herein. Given the algorithms discussed herein, one skilled in the art may utilize commercial support vector machine optimization software to solve the given problems. Also, the software application 160 may include the functions of and/or be integrated with the commercial support vector machine optimization software. The software application 160 may control and operate the commercial support vector machine optimization software. An example of commercial software in which the embodiments discussed herein can be executed is MATLAB®.
According to an embodiment,
At block 305, the computer 100 provides (loads and/or executes) a primal optimization problem over a join of the tables T1 and T2, in which the primal optimization problem includes (equation (8)):
minimizew,b,ξ ½∥w∥2+C·Σ(i,j)∈IJξij
subject to yixijTw−yib+ξij≧1 ((i,j)∈IJ)
ξij≧0 ((i,j)∈IJ).
At block 310, the computer 100 obtains (loads and/or executes) a modified optimization problem from the primal optimization problem, in which the modified optimization problem includes (equation (12)):
Minimizew,b,η,ζ ½∥w∥2+C·Σi=1mJ(i)·ηi+C·Σj=1nI(j)·ζj
subject to yixijTw−yib+ηi+ζj≧1 ((i,j)∈IJ)
ηi,ζj≧0.
At block 315, the computer 100 reduces penalty variables in the modified optimization problem by replacing the penalty variables in a form of ξij for each (i,j)∈IJ with the penalty variables in a form of ξij=ηi+ζj (as seen in equation (10)).
At block 320, the computer 100 obtains a compact form of the modified optimization problem, in which the compact form (equation (15)) includes:
minimizew,b,η,ζ,σ,τ ½∥wP∥2+½∥wU∥2+½∥wQ∥2+C·Σi=1mJ(i)·ηi+C·Σj=1nI(j)·ζj
subject to yipiTwP−yib+ηi−σk≧0 (i∈Ik, k=1, . . . , l)
qjTwQ+ζj−τk≧0 (j∈Jk, k=1, . . . , l)
σk+zkTwU+τk≧1 (for k=1, . . . , l such that Jk≠∅)
σk+zkTwU≧1 (for k=1, . . . , l such that Jk=∅)
ηi≧0 (i=1, . . . , m), ζj≧0 (j=1, . . . , n).
At block 325, the computer 100 solves the compact form of the modified optimization problem, in which the compact form includes the auxiliary variables σ1, . . . , σl and τk for k=1, . . . , l such that Jk≠∅. One skilled in the art understands that the computer 100 may include and execute commercial software products (such as MATLAB® software) to solve the computations of the compact form (and any other problems/equations discussed herein).
The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.