An embedding is a mapping of discrete, categorical variables to a vector of continuous numbers. In the context of neural networks, embeddings are learned continuous vector representations of discrete, categorical variables that are useful because they can reduce the dimensionality of such variables and meaningfully represent categories in the transformed space. In a supervised neural network, embeddings form the parameters (weights) of the neural network which are adjusted to minimize loss on a given task. The resulting embedded vectors are representations of categories where similar categories are closer to one another in vector space. For example, it is possible to learn an embedding for each of one million users based on internet click history (e.g., indicating interests and intents) for the user, where each embedding has a large number of dimensions (e.g., 100+ dimensions) and where users with similar interests are characterized by spatially similar embeddings.
Although the use of embeddings reduces the dimensionality of the categorical variables they represent, embeddings are typically still quite large. Performing computations using embeddings can be resource intensive.
A method disclosed herein provides generally for dimensional reduction of correlated vectors. The method includes obtaining first and second embeddings having a known correlation to one another and defining multiple constraints with respect to the first and second embeddings. At least one of the defined constraints preserves the correlation between the first and second embeddings across a transformation that reduces the dimensionality of such embeddings. The method further provides for deriving mapping functions for generating transformed versions of the first embedding and the second embedding based upon the defined constraints. The method further provides for selecting data to output to a user based on one or more computations that utilize the transformed versions of the first embedding and the second embedding.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. These and various other features and advantages will be apparent from a reading of the following Detailed Description.
When a deep learning neural network is used to generate embeddings to represent discrete variables, the resulting embeddings are correlated in vector space such that more similar variables have embeddings that are closer to one another in the vector space than variables that are less similar. Correlation is measured by computing a similarity metric, such as by computing a dot product between two vectors or by computing a cosine similarity between the vectors.
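By way of a hedged illustration only (not part of the original disclosure), the following sketch computes both similarity metrics with NumPy; the vector names, the 128-dimension size, and the random values are assumptions used as stand-ins for learned embeddings:

```python
# Minimal sketch of the two similarity metrics discussed above (dot product and cosine
# similarity). The embeddings here are random placeholders, not learned vectors.
import numpy as np

rng = np.random.default_rng(0)
user_embedding = rng.normal(size=128)    # hypothetical 128-dimensional user embedding
movie_embedding = rng.normal(size=128)   # hypothetical 128-dimensional content embedding

dot_similarity = float(np.dot(user_embedding, movie_embedding))
cosine_similarity = dot_similarity / (
    np.linalg.norm(user_embedding) * np.linalg.norm(movie_embedding)
)
print(f"dot product: {dot_similarity:.3f}, cosine similarity: {cosine_similarity:.3f}")
```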
In some neural networks, embeddings may be co-trained for variables corresponding to different types of objects. For example, a same neural network may generate embeddings representing users and embeddings representing content items the users interact with. Since user embeddings and content item embeddings are generated by co-training a same deep learning model with different types of object data, a correlation exists between each user embedding and each content item embedding, where the strength of the correlation is generally proportional to the likelihood of the user being interested in or acting upon (e.g., clicking on) the content item. Other deep learning models may similarly be co-trained on three or more different object types to generate embeddings that can readily be compared to one another, such as by computing a dot product or cosine similarity between pairs of the embeddings.
In some implementations, machine learning models generate embeddings that are very long, such as a few hundred dimensions. It can be computationally expensive to perform basic computations on embeddings of such size. For example, if a dataset includes embeddings for one million users, identifying users with similar interests may entail computing a dot product between thousands of pairs of embeddings in the dataset, where each dot product is itself computed over hundreds of dimensions. To reduce the complexity and processing resources consumed by such computations, a common technique is to reduce the dimensionality of embeddings via known techniques that aim to mitigate information loss. As is discussed further below, these known techniques do not preserve correlations between initially-correlated vectors and are therefore not suitable for use in systems that rely on such correlations to perform computations.
Although a number of methods exist for reducing the dimensionality of embeddings, one popular tool is principal component analysis (PCA). PCA decreases the dimensionality of embeddings by creating new variables that are orthogonal to one another in a new coordinate system and that successively maximize variance across the dataset. While PCA and other dimensionality reduction techniques work well in many use cases, these techniques are less useful for cases that depend upon the preservation of vector-to-vector correlations. In cases where two or more embeddings are originally correlated (e.g., they exhibit some measurable degree of similarity due to being generated by the same neural network), the correlation can be diminished when the two vectors are individually subjected to existing dimensionality reduction techniques, such as PCA. Consequently, computations subsequently performed using the reduced-dimensionality embeddings are less accurate than the same computations performed on the embeddings prior to the dimensionality reduction.
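The following sketch (synthetic data, not taken from the disclosure) illustrates this limitation: PCA fit independently on two correlated embedding sets generally does not preserve the user-to-item similarity that existed in the original space.

```python
# Hedged illustration: reducing two correlated embedding sets with independent PCA fits
# typically changes the pairwise similarity between a user vector and an item vector.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
base = rng.normal(size=(1000, 128))
users = base + 0.1 * rng.normal(size=(1000, 128))   # synthetic "user" embeddings
items = base + 0.1 * rng.normal(size=(1000, 128))   # correlated synthetic "item" embeddings

users_reduced = PCA(n_components=8).fit_transform(users)  # reduced independently
items_reduced = PCA(n_components=8).fit_transform(items)  # reduced independently

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("original cosine(user 0, item 0):", cosine(users[0], items[0]))
print("reduced  cosine(user 0, item 0):", cosine(users_reduced[0], items_reduced[0]))
# The reduced-space value generally differs from the original-space value.
```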
The dimensionality reduction technique disclosed herein preserves the correlations between embeddings generated by a same neural network while also mitigating information loss in a manner similar to currently-popular dimensionality reduction techniques, such as PCA. Preserving the degree of correlation between initially-correlated embeddings (e.g., generated by a same neural network) facilitates accurate embedding-based computations at a reduced computational cost.
The neural network 102 is a deep learning network that may assume a variety of different forms in different implementations including that of a graph neural network (GNN), convolutional neural network (CNN), a recurrent neural network (RNN), an artificial neural network (ANN), etc. In one implementation, the neural network 102 is set up with supervised inputs (e.g., an input dataset 112) to train a model to solve a particular supervised machine learning problem. For example, the neural network 102 may be trained to receive a single word input and output a word that is semantically most similar; alternatively, the neural network 102 may be trained to receive user information identifying watch history or what the user is currently watching and, in response, output a prediction of a movie the user is most likely to watch next, or to complete any other task that can be trained via a supervised learning approach.
The neural network 102 includes at least an embedding generator 106 that receives an input dataset 112 and generates a set of embeddings 118, where each individual embedding in the set corresponds to an object defined in the input dataset 112. For example, the objects may be users, movies, ads, words, etc. Some types of deep learning networks may be co-trained to generate embeddings for multiple different types of objects within a same vector space.
In one implementation, the neural network 102 is a recommender system tasked with making predictions about a user's interests based on the interests of many other users. For example, the system may be trained to recommend movies to users based on a catalog of movies and records of which movies each user has watched in the past. In this example, the embedding generator 106 embeds the movies (e.g., a first object type) in a low-dimensional vector space where movies that have been jointly watched by a same user are near one another. The embedding generator 106 also creates embeddings for users (e.g., a second object type) in the same space based on the movies those users have watched. In the resulting space, each user embedding is close to the movie embeddings corresponding to the movies that the user has watched. Since nearby users and movies share preferences, this method allows for recommendations to be generated based on the proximity between the embeddings of the different object types.
In the above example, the set of embeddings 118 includes embeddings of different object types (e.g., users and movies) that are correlated in the sense that the embeddings can be compared to one another to measure a similarity between the objects of the different types. This correlation can be readily measured by computing a similarity metric, such as a dot product between two embeddings or a cosine similarity between two embeddings. In these examples, the magnitude of the similarity metric 108 is indicative of the strength of correlation (similarity) between the two embeddings.
By example and without limitation,
In some implementations, the embeddings 120, 122 are long numerical vectors including many (e.g., hundreds) of dimensions. In such cases, computing the similarity metric 108 can incur significant computational overhead, particularly in cases where the similarity metric 108 is computed many times over, such as with respect to thousands of user embeddings and thousands of content items (e.g., movies, ads, etc.). Some systems that utilize embeddings implement dimensional reduction techniques, such as PCA. However, these techniques do not preserve the degree of correlation between embeddings. For example, the embeddings 120, 122 may initially have a cosine similarity of 0.9, representing a high degree of similarity. When the embeddings are individually subjected to existing dimensionality reduction techniques, this correlation is not preserved, and the cosine similarity of the reduced embeddings may be higher or lower than 0.9. Consequently, the similarity metric 108 will yield a different (less accurate) value once the embeddings 120, 122 are individually subjected to an existing dimensionality reduction technique. For this reason, existing dimensionality reduction techniques cannot be reliably used in systems that make predictions based on correlations between embeddings.
To address the foregoing, the system 100 includes a correlation-preserving dimensionality reducer 104 that accesses pre-computed and stored mapping functions 116 when transforming the embeddings 120, 122 to corresponding reduced-dimensionality embeddings 124, 126. The mapping functions 116 are derived by a mapping function derivation engine 114 that imposes constraints effective to ensure that a defined correlation between the original embeddings 120, 122 is maintained (e.g., substantially the same or satisfying a defined threshold) across the transformation.
In one implementation, the mapping function derivation engine 114 derives a mapping function for each different object type represented within the neural network 102. For example, the derivation of the mapping functions 116 may occur during initial training of the neural network 102 and/or at one or more additional times after the training is complete (e.g., to update the stored functions based on newly-received object data).
The mapping function derivation engine 114 co-derives at least a first mapping function X with respect to embeddings corresponding to a first object type (e.g., users) and a second mapping function Y with respect to embeddings corresponding to a second object type (e.g., content items). The mapping functions X,Y are derived based upon a set of constraints collectively ensuring that a known initial correlation between embeddings corresponding to different object types is preserved during the dimensional reduction of those embeddings.
If, for example, V1 and V2 represent initial embeddings corresponding to object type 1 and object type 2, respectively, the mapping function derivation engine 114 derives mapping functions X and Y to arrive at corresponding reduced-dimension embeddings V1′ and V2′, where V1′=V1(X) and V2′=V2(Y) and where the computed similarity metric 108 between V1 and V2 is substantially the same as the computed similarity metric 110 between V1′ and V2′. In different implementations, the mapping function derivation engine 114 may associate different weight terms with each constraint equation when deriving the mapping functions X, Y from the constraint equations. For example, the weight terms may be selectively varied to ensure tighter or looser preservation of vector-to-vector correlations and/or other relationships for which preservation is desired based upon the design goals and purpose served by the given model.
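As a non-authoritative sketch (assumed shapes and random placeholder values, not the disclosed implementation), the relation V1′=V1(X), V2′=V2(Y) and the corresponding preservation check can be expressed as follows:

```python
# Sketch of applying co-derived linear maps X, Y and checking that the dot-product
# similarity is preserved across the dimensionality reduction.
import numpy as np

R, N = 128, 4                       # assumed original and reduced dimensionalities
rng = np.random.default_rng(2)
V1 = rng.normal(size=R)             # embedding of object type 1 (e.g., a user)
V2 = rng.normal(size=R)             # embedding of object type 2 (e.g., a content item)

# Placeholder maps: random here, so the check below will generally fail; properly
# co-derived X, Y (as described in this disclosure) are solved so that it passes.
X = rng.normal(size=(R, N))
Y = rng.normal(size=(R, N))

V1_prime, V2_prime = V1 @ X, V2 @ Y                 # V1' = V1(X), V2' = V2(Y)
print("similarity before:", V1 @ V2)
print("similarity after: ", V1_prime @ V2_prime)    # should match when X, Y are derived
```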
When the neural network 102 performs a computation that compares embeddings of different object types, the correlation-preserving dimensionality reducer 104 retrieves the stored mapping functions (e.g., X, Y) associated with each different embedding type and generates the reduced-dimensionality embeddings (e.g., 124, 126) for each different embedding that is to be compared. Comparative computations, such as the similarity metric 110, are then computed based on the reduced-dimensionality embeddings (e.g., 124, 126) rather than the lengthy and computationally expensive corresponding original embeddings (e.g., 120, 122). The neural network 102 selects data to output (e.g., the solution to the trained task) based on these computations utilizing the reduced-dimensionality embeddings. This technique significantly reduces processing and power resources consumed by the neural network as compared to systems that perform comparative computations using lengthy original embeddings.
As input, the mapping function derivation engine 200 receives a pair of vectors V1 and V2. In one implementation, the vectors V1, V2 are generated by a same deep learning model and correspond to different types of objects co-trained as embeddings with the same deep learning model. For example, the embedding V1 may correspond to a user while the embedding V2 corresponds to a digital content item that the user may interact with, such as an advertisement.
The end objective of the mapping function derivation engine 200 is to identify mapping functions 116 (X, Y) that are usable to reduce the dimensionality of V1, V2 while preserving a correlation between V1 and V2 as well as other designated vector relationship(s). Thus, the goal is to identify an X and Y such that V1′=V1(X) and V2′=V2(Y), where X and Y satisfy a set of constraints designed to ensure preservation of the known correlation between V1, V2 as well as to preserve other relationships and/or information, as discussed further below.
Upon receipt at the mapping function derivation engine 200, the embeddings V1, V2 are input to a preliminary dimensionality reducer 202, which applies an alternative (known) technique for reducing the dimensionality of the embeddings individually. In one implementation, the preliminary dimensionality reducer 202 implements a classical technique, such as principal component analysis (PCA) or linear discriminant analysis (LDA). In other implementations, the preliminary dimensionality reducer 202 implements other technique(s) for dimensionality reduction. For example, a technique for intensity correlation constraint (ICC) may be applied.
The preliminary dimensionality reducer 202 transforms each of the input embeddings V1, V2 to a corresponding preliminary-reduced-dimensionality embedding V1″ and V2″.
In one implementation, the preliminary dimensionality reducer 202 applies principal component analysis (PCA). PCA reduces the dimensionality of each vector V1, V2 while retaining the variation present in each original vector V1, V2 to the maximum extent possible. This is achieved by transforming the variables within each vector to a new set of variables known as the principal components (or simply, the PCs—eigenvectors of a covariance matrix), which are orthogonal to one another and successively capture the maximum remaining variance. In another implementation, the preliminary dimensionality reducer 202 applies linear discriminant analysis (LDA), which uses statistics to find linear combinations of features that characterize or separate two or more classes in a way that achieves maximum separation between classes and minimum separation within each class. In still other implementations, the preliminary dimensionality reducer 202 applies other classical or non-classical dimensional reduction techniques.
Regardless of the techniques applied by the preliminary dimensionality reducer 202, the computation by the preliminary dimensionality reducer 202 results in a pair of reduced dimensionality vectors V1″, V2″ that are associated with the original vectors V1, V2 by way of mapping functions MF1, MF2, where MF1 and MF2 are linear transformation matrices of size (N, M+1), where N is the size of the reduced dimension vector, which is smaller than the original size (R) of V1 and V2. Although MF1 and MF2 may not be known initially, these functions can be readily solved for since V1″=MF1(V1) and V2″=MF2(V2). In one implementation, the mapping functions MF1, MF2 are subsequently used as initial values for the mapping functions 216 (X,Y) derived via the operations discussed below. In other implementations, the mapping functions 116 are initialized using identity matrices.
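One way such preliminary maps could be obtained is sketched below; this is an assumption for illustration (MF1 and MF2 are represented here by a PCA component matrix plus a mean offset, consistent with an affine map, and the embedding data is synthetic):

```python
# Hedged sketch: deriving preliminary maps MF1, MF2 from independent PCA fits and using
# them to produce the preliminary reduced vectors V1'' and V2''.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
type1_embeddings = rng.normal(size=(500, 128))   # e.g., user embeddings (synthetic)
type2_embeddings = rng.normal(size=(500, 128))   # e.g., item embeddings (synthetic)

N = 8
pca1 = PCA(n_components=N).fit(type1_embeddings)
pca2 = PCA(n_components=N).fit(type2_embeddings)

# Each preliminary map is an affine transform: V'' = (V - mean) @ components.T
MF1 = (pca1.components_, pca1.mean_)
MF2 = (pca2.components_, pca2.mean_)

V1 = type1_embeddings[0]
V1_dd = (V1 - MF1[1]) @ MF1[0].T     # V1'': preliminary reduced-dimensionality vector
print(V1_dd.shape)                   # (8,)
```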
The original vectors V1, V2 and the preliminary-reduced-dimensionality vectors V1″, V2″ are input to a constraint constructor 204, which constructs a number of constraints 208 representing desired relationships to preserve when mapping V1 and V2 to corresponding reduced-dimensionality vectors V1′, V2′.
Although the constraints 208 imposed may vary from one implementation to another, at least one of the constraints 208 is a correlation constraint requiring that a computed similarity metric between V1, V2 satisfy a threshold similarity with a computed similarity metric between the final reduced dimensionality vectors V1′, V2′. For example, the correlation constraint may be an expression of the form of equation 1 below:
V1′·V2′ = V1·V2,   weight = W1   (1)
where W1 is a selected weight, subsequently used by a linearizer 210, defining the strength of correlation that is to be preserved. Here, a weight of ‘1’ would enforce preservation of the correlation to be the same or substantially the same before and after the dimensionality reduction; however, other weights may be suitable in some implementations.
Although the number of constraints defined by the constraint constructor 204 may vary from one implementation to another, it may be easier to arrive at a stable, converging solution to the mapping functions X, Y when two or more separate constraints are defined and used in the derivation of X, Y. One optional constraint that may help to obtain a converging solution is to define magnitude constraints that ensure the magnitude of each reduced dimension vector V1′, V2′ is similar to that of the original vectors V1, V2. For example, magnitude constraints may assume the form of equations (2) and (3) below:
|V1′|² = |V1|²,   weight = W2   (2)
|V2′|² = |V2|²,   weight = W3   (3)
where W2 and W3 are selected weights, subsequently used by the linearizer 210, defining the degree by which the intensity is preserved between the original vector and corresponding reduced dimension vector.
Still other implementations may additionally or alternatively define one or more constraints that are intended to mitigate information loss between the preliminary approach (e.g., the MF1, MF2 functions applied by the preliminary dimensionality reducer 202) and the modified approach applied via the mapping functions 216. By restricting a vector distance between the final modified vectors V1′, V2′ and the versions of these vectors derived using the preliminary dimension reduction approach V1″, V2″, properties maintained using the preliminary approach(es) may also be maintained in the modified approach proposed herein. For example, distance constraints may assume the form of equations (4) and (5) below:
d(V1′, V1″) = |V1′ − V1″| = 0,   weight = W4   (4)
d(V2′, V2″) = |V2′ − V2″| = 0,   weight = W5   (5)
where W4 and W5 are selected weights, subsequently used by the linearizer 210, defining the degree by which the distance is preserved between the preliminary dimensionality reduction result and the reduced dimensionality vectors that are to be generated by the mapping functions X,Y. If, for example, the preliminary dimension reduction approach applies PCA, which maximizes variance within each vector, the above-defined distance constraints ensure that the maximum variance property is maintained to some predefined extent in the final reduced dimension vectors V1′ and V2′ relative to corresponding vectors V1″ and V2″ that are derived by the preliminary dimensionality reducer 202.
Equations 4 and 5 above generally function to preserve the same type of information in the final vectors V1′, V2′ as would be preserved in the corresponding preliminary-reduced-vector V1″ or V2″, regardless of the nature of such information. Thus, these distance constraints may be beneficial in implementations where alternative dimensionality reduction techniques are employed instead of PCA (e.g., such as LDA and other techniques).
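Purely as an illustrative sketch (the function name, the arguments V1_dd/V2_dd standing in for V1″/V2″, and the default weights are assumptions), the weighted constraints of equations (1)-(5) can be expressed as residuals of candidate maps X and Y, anticipating the rewriting described next:

```python
# Weighted constraint residuals for candidate mapping matrices X, Y (each of shape (R, N)).
import numpy as np

def constraint_residuals(X, Y, V1, V2, V1_dd, V2_dd, w=(1.0, 1.0, 1.0, 0.5, 0.5)):
    W1, W2, W3, W4, W5 = w
    V1p, V2p = V1 @ X, V2 @ Y                       # reduced-dimensionality vectors V1', V2'
    return np.array([
        W1 * (V1p @ V2p - V1 @ V2),                 # eq. (1): preserve the correlation
        W2 * (V1p @ V1p - V1 @ V1),                 # eq. (2): preserve the magnitude of V1
        W3 * (V2p @ V2p - V2 @ V2),                 # eq. (3): preserve the magnitude of V2
        W4 * np.linalg.norm(V1p - V1_dd),           # eq. (4): stay close to preliminary V1''
        W5 * np.linalg.norm(V2p - V2_dd),           # eq. (5): stay close to preliminary V2''
    ])
```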
Notably, the constraint equations 1-5 above can be re-written in terms of the mapping functions X, Y, which represent the unknowns to be solved for. That is, equations 1-5 can be written in the form of equations 6-10 below:
F1(X, Y) = ∥V1′∥ = ∥V1∥ = Σi=1…R (v1i)² = B1   (6)
F2(X, Y) = ∥V2′∥ = ∥V2∥ = Σi=1…R (v2i)² = B2   (7)
F3(X, Y) = V1′·V2′ = V1·V2 = Σi=1…R v1i·v2i = B3   (8)
F4(X, Y) = ∥V1′ − V1″∥ = Σi=1…N (v1i′ − v1i″)² = 0 = B4   (9)
F5(X, Y) = ∥V2′ − V2″∥ = Σi=1…N (v2i′ − v2i″)² = 0 = B5   (10)
where R is the original dimension of each of V1, V2 and N is the final dimension of V1′ and V2′. After the constraints 208 are constructed, a linearizer 210 applies partial derivatives to the non-linear constraint equations (e.g., equations 6-10 above) to find linear approximations to each function at a set of points. The linear approximation of each of the constraint equations (6-10) is given generally by re-writing the equation according to the form:
which can be further re-written in the form A·X=B, where:
Here, matrix A is obtained by taking the partial derivatives of the function F with respect to unknowns X and Y. The initial values of each function (X0, Y0) may be set to equal MF1 and MF2, which may be derived following the preliminary dimensional reduction described above with respect to the preliminary dimensionality reducer 202.
By applying linearization to approximate the constraint equations (eq. 6-10) at a series of points (e.g., indices within each of the vectors V1, V2), a dataset is derived. A least squares fitting engine 212 uses a least squares approach to solve for X and Y by minimizing the sum of the squares of the residuals made in the results of every single equation at each of the approximated points.
The above approach essentially provides for setting initial values of the mapping functions X, Y to the corresponding preliminary mapping functions MF1 and MF2 or to prespecified values, and then iteratively modifying X and Y by applying the least squares optimization at each of the data points approximated by the equations. This adjustment continues at each approximated point (e.g., corresponding to an index in V1, V2) until convergence is reached or until the solution satisfies a predefined threshold with respect to one or more (e.g., all) of the predefined constraints.
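A minimal stand-in for this linearize-and-fit loop is sketched below; it is not the disclosed implementation. It reuses the hypothetical constraint_residuals helper from the earlier sketch and relies on scipy.optimize.least_squares, which iteratively linearizes the residuals and solves a least squares subproblem in a manner comparable to the procedure described above:

```python
# Sketch of jointly solving for X, Y from weighted constraint residuals, starting from the
# preliminary maps (e.g., flattened MF1, MF2) as initial values.
import numpy as np
from scipy.optimize import least_squares

def solve_mappings(pairs, X0, Y0):
    """pairs: list of (V1, V2, V1_dd, V2_dd) tuples, i.e., several correlated embedding
    pairs plus their preliminary (e.g., PCA) reductions; X0, Y0: initial (R, N) maps."""
    R, N = X0.shape

    def residuals(params):
        X = params[:R * N].reshape(R, N)
        Y = params[R * N:].reshape(R, N)
        # Stack the weighted residuals (see constraint_residuals above) over all supplied
        # pairs, mirroring the "approximate at a set of points" step described in the text.
        return np.concatenate([
            constraint_residuals(X, Y, V1, V2, V1_dd, V2_dd)
            for V1, V2, V1_dd, V2_dd in pairs
        ])

    fit = least_squares(residuals, np.concatenate([X0.ravel(), Y0.ravel()]))
    return fit.x[:R * N].reshape(R, N), fit.x[R * N:].reshape(R, N)
```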
The above-described operations result in mapping functions X, Y, which are usable to uniformly reduce the dimension of V1, V2 to a preselected number of dimensions while still preserving (1) the known initial correlation between V1 and V2; (2) the magnitude of each of the original vectors V1 and V2; and (3) the variance and/or other types of information preserved when applying the preliminary dimensionality reduction.
In some implementations, the mapping function derivation engine 200 derives multiple different versions of the mapping functions X and Y, each version being tailored to provide a reduced dimensionality vector (V1′, V2′) of a selected size. For example, the mapping functions 116 that are derived and stored may include a first set of mapping functions X1, Y1 that are usable to convert V1, V2 to 2-dimensional vectors; a second set of mapping functions X2, Y2 that are usable to convert V1, V2 to 4-dimensional vectors; a third set of mapping functions X3, Y3 that are usable to convert V1, V2 to 8-dimensional vectors, etc. In this example, the three generated sets of mapping functions X1, Y1, X2, Y2, X3, Y3 are saved for subsequent computations performed on either the vectors V1, V2, or on updated versions of such vectors (e.g., if new object information is received in the interim), or on other vectors (embeddings) of the same variable types co-trained on the same deep learning model.
In the example of
In one implementation, the neural network 302 is a graph neural network (GNN) that initializes user objects and advertisement objects as graph nodes connected by edges representing a determined similarity between the corresponding nodes. For example, embedding generator 308 performs feature extraction and aggregation on the user objects to generate embeddings specific to users and then separately performs feature extraction and aggregation on the advertisement objects to generate embeddings specific to advertisements. Following this, the embedding generator 308 performs joint feature extraction and aggregation on the embeddings that have been created for both users and ads, yielding a third set of embeddings representative of each user and each ad in the same vector space. In this third set of embeddings, a measurable correlation exists between each user embedding and each ad embedding, where the strength of the correlation between the embeddings generally represents the similarity between the two (e.g., the likelihood that the user is to interact with or react positively to the advertisement). This third set of embeddings corresponds to “embeddings 316” in
A set of the embeddings 316 generated at a first time (e.g., January 2021) are provided to a mapping function derivation engine 318, which uses the embeddings to derive a set of mapping functions 320 usable to reduce the size of each embedding in the set of embeddings 316 while preserving correlations between the embeddings. The set of mapping functions 320 may be stored and used to reduce the dimensionality of versions of the embeddings 316 that are created in the future (e.g., modified after January 2021). For example, the mapping functions 320 may be generated based on a set of embeddings 316 generated in January 2021, stored, and subsequently used to reduce the dimensionality of the embeddings in January 2022, after those embeddings have been updated several times. Operations performed by the mapping function derivation engine 318 may be the same or similar to those described above with respect to
Given a neural network trained with two types of objects—e.g., user objects and ad objects—the mapping function derivation engine 318 co-derives at least a pair of mapping functions (X,Y), where the first mapping function of the pair (e.g., X) is usable to transform the embeddings corresponding to the first object type and where the second mapping function of the pair (e.g., Y) is usable to transform the embeddings of the second object type. Some implementations of the mapping function derivation engine 318 may derive multiple pairs of mapping functions (e.g., [X1, Y1], [X2, Y2], [X3, Y3]), where each pair provides a dimensionality reduction of different magnitude on a pair of correlated vectors corresponding to the first and second different object types. For example, [X1, Y1] may be applied to transform each of a user embedding and an ad embedding from 128 dimensions to 2 dimensions, while [X2, Y2] may be applied to transform the user embedding and the ad embedding from 128 dimensions to 4 dimensions.
Each co-derived stored pair (X, Y) of mapping functions 320 is usable to reduce the dimensionality of any pair of embeddings co-trained on the neural network, where the embeddings of the pair correspond to different object types. The resulting transformed embeddings have an equal number of dimensions and are correlated with one another by a predefined degree relative to the correlation existing between the corresponding original (non-transformed) embeddings. In one implementation, the correlation between the transformed embeddings is the same or substantially the same as the correlation between the original (non-transformed) embeddings.
A similarity predictor 322 performs the task of predicting which ad in the advertisement database is most likely to appeal to a selected user (e.g., User 1). This operation is performed by measuring a similarity metric between an embedding for the user (e.g., V1) and an embedding for each ad in the database (e.g., V2, V3, V4 . . . ). For example, the similarity metric for a user and a single ad may be computed by taking the dot product between the embedding for the user and the embedding for the ad. Notably, the embeddings utilized in this computation may be different from the corresponding embeddings generated from the input dataset 312. For example, the embeddings for users may be periodically updated based on new user information (e.g., new digital content items that the user has interacted with). These updated embeddings may be transformed using mapping functions 320 that are derived and stored at an earlier time, such as during training of the neural network.
To reduce the computational complexity of computing a similarity metric for a pair of the embeddings 316, the similarity predictor 322 includes a correlation-preserving dimensionality reducer 324. The correlation-preserving dimensionality reducer 324 selects a pair [X,Y] of the mapping functions 320 that have been previously co-derived and stored by the mapping function derivation engine 318. The similarity predictor 322 uses the selected mapping function pair to reduce the dimensionality of select user/ad embedding pairs.
For each of multiple user/ad embedding pairs, the correlation-preserving dimensionality reducer 324 generates a transformed pair of correlated embeddings of reduced dimensionality, and the similarity predictor 322 computes the similarity metric for the pair. For example, reduced dimension embeddings may be dynamically generated for a user (user_1) and each of 5000 advertisements. The similarity predictor 322 may take the dot product between the reduced dimension user embedding and each one of the reduced dimension ad embeddings to identify which ad embedding has a greatest similarity to the user embedding. The similarity predictor 322 identifies and selects the ad embedding that is most similar to the user embedding and outputs information about the corresponding ad in the form of a prediction 326. For example, the prediction may indicate that the user is most likely to click on the selected ad. In another implementation, the selected ad is automatically provided as an input to a system that presents advertisements to the user.
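A hedged sketch of this prediction step follows (synthetic data; the dimensions, the number of advertisements, and the random stand-ins for the stored mapping pair are assumptions):

```python
# Reduce one user embedding and many ad embeddings with a stored (X, Y) pair, then rank
# ads by dot product over the reduced vectors rather than the full-length vectors.
import numpy as np

rng = np.random.default_rng(4)
R, N, num_ads = 128, 4, 5000
X = rng.normal(size=(R, N))          # stand-ins for the stored, co-derived mapping pair
Y = rng.normal(size=(R, N))
user = rng.normal(size=R)            # embedding for user_1
ads = rng.normal(size=(num_ads, R))  # embeddings for 5000 advertisements

user_reduced = user @ X              # shape (4,)
ads_reduced = ads @ Y                # shape (5000, 4)
scores = ads_reduced @ user_reduced  # one dot product per ad, over 4 dims instead of 128
best_ad = int(np.argmax(scores))
print(f"predicted most relevant ad index: {best_ad}")
```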
In some implementations, the correlation-preserving dimensionality reducer 324 interfaces with a graphical user interface that includes a setting allowing a system administrator to select the size of reduced dimensionality embeddings that are to be created and used in the computations performed by the similarity predictor. Depending on processing and power resource availability, the system administrator may toggle the setting to provide for smaller (e.g., 2 dimension) or larger (e.g., 6 dimension) embeddings in this set of computations. The correlation-preserving dimensionality reducer 324 retrieves a select pair of the mapping functions 320 that is associated with the selected dimension setting.
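For illustration, such a dimension setting could map to a stored pair via a simple lookup; the helper below is hypothetical and not described in the disclosure:

```python
# Look up a stored, co-derived (X, Y) pair by the administrator-selected target dimension.
def select_mapping_pair(mapping_functions: dict, target_dim: int):
    """mapping_functions maps a target dimensionality (e.g., 2, 4, 6) to an (X, Y) pair."""
    if target_dim not in mapping_functions:
        raise KeyError(f"no mapping-function pair stored for {target_dim} dimensions")
    return mapping_functions[target_dim]
```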
The above example provided with respect to
A constraint construction operation 406 defines a set of constraints, where each defined constraint represents a desired relationship that is to be preserved during an alternative (non-preliminary) dimensionality reduction of the original vectors V1, V2. According to one implementation, the constraint construction operation 406 defines at least one constraint that ensures a predefined correlation is maintained between the original vectors V1 and V2 after the transformation. One or more constraints may also be defined to preserve the magnitude of the individual vectors across the transformation. In some implementations, additional constraints are defined to preserve variance or classification information that would be generated during the preliminary dimension reduction technique. For example, one or more constraints may be defined that restrict a distance between V1′ (the final reduced embedding) and V1″ (the reduced embedding yielded by the preliminary dimensionality reduction technique applied in operation 402).
An approximation operation 408 uses the constraint equations to approximate transformation functions X, Y at a set of points. This approximation may be achieved using various known techniques, such as by linearizing the constraint equations by taking partial derivatives of each of the constraint equations with respect to transformation functions X, Y. In such an implementation, the transformation functions MF1 and MF2 associated with the preliminary dimension reduction operation 402 may be used as initial values of X, Y, and the indices of the multi-dimensional embeddings V1, V2 may provide a set of points (e.g., V1i, V2i, where i=0 through i=R−1 and R is the original embedding size) for approximating each of the linearized constraint equations.
A fitting operation 410 solves for functions X and Y that best fit the set of points generated by the approximation operation 408. For example, the fitting operation 410 may utilize a least squares optimization technique that minimizes the sum of the squares of the residuals for each data point approximated by the linearized constraint equations.
A storing operation 412 stores the transformation functions X, Y for future use. Subsequently, a receiving operation 414 receives a set of correlated vectors associated with the stored mapping functions. For example, the receiving operation 414 may receive an updated set of the vectors that were initially used to derive the transformation functions X, Y. Alternatively, the received set of correlated vectors may include other vectors created by the same deep learning model but not included in the initial set of vectors used to derive the transformation functions. A computation function 416 uses the stored transformation functions to reduce the dimensionality of each vector in the newly-received vector set.
One or more applications 540, such as the embedding generator 106 or 308, mapping function derivation engine 114 or 200, correlation-preserving dimensionality reducer 104 or 324, and similarity predictor 322 are loaded in the memory 504 and executed on the operating system 510 by one or more of the processors 502. Applications 540 may receive input from various local input devices (not shown) such as a microphone, keypad, mouse, stylus, touchpad, joystick, etc. Additionally, the applications 540 may receive input from one or more remote devices, such as remotely-located smart devices, by communicating with such devices over a wired or wireless network using one or more communication transceivers 530 and an antenna 532 to provide network connectivity (e.g., a mobile phone network, Wi-Fi®, Bluetooth®). The processing device 500 further includes storage 520 and a power supply 516, which is powered by one or more batteries and/or other power sources and which provides power to other components of the processing device 500. The power supply 516 may also be connected to an external power source (not shown) that overrides or recharges the built-in batteries or other power sources.
The processing device 500 may include a variety of tangible computer-readable storage media and intangible computer-readable communication signals. Tangible computer-readable storage can be embodied by any available media that can be accessed by the processing device 500 and includes both volatile and nonvolatile storage media, removable and non-removable storage media. Tangible computer-readable storage media excludes intangible and transitory communications signals and includes volatile and nonvolatile, removable and non-removable storage media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Tangible computer-readable storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CDROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other tangible medium which can be used to store the desired information, and which can be accessed by the processing device 500. In contrast to tangible computer-readable storage media, intangible computer-readable communication signals may embody computer readable instructions, data structures, program modules or other data resident in a modulated data signal, such as a carrier wave or other signal transport mechanism. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, intangible communication signals include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
Some implementations may comprise an article of manufacture. An article of manufacture may comprise a tangible storage medium (a memory device) to store logic. Examples of a storage medium may include one or more types of processor-readable storage media capable of storing electronic data, including volatile memory or non-volatile memory, removable or non-removable memory, erasable or non-erasable memory, writeable or re-writeable memory, and so forth. Examples of the logic may include various software elements, such as software components, programs, applications, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, operation segments, methods, procedures, software interfaces, application program interfaces (API), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. In one implementation, for example, an article of manufacture may store executable computer program instructions that, when executed by a computer, cause the computer to perform methods and/or operations in accordance with the described implementations. The executable computer program instructions may include any suitable type of code, such as source code, compiled code, interpreted code, executable code, static code, dynamic code, and the like. The executable computer program instructions may be implemented according to a predefined computer language, manner or syntax, for instructing a computer to perform a certain operation segment. The instructions may be implemented using any suitable high-level, low-level, object-oriented, visual, compiled and/or interpreted programming language.
The following summary provides a non-exhaustive set of illustrative examples of the technology set forth herein.
(A1) According to a first aspect, some implementations include a method (e.g.,
The method of A1 is advantageous because it allows for a reduction in the processing resources needed to compare correlated vectors. For example, instead of taking a dot product of two long vectors (e.g., 100+ dimensions), transformation functions can be retrieved and used to dynamically reduce the two vectors to a small number of dimensions (e.g., 2 dimensions or 4 dimensions). The dot product computation is then dramatically simplified (reducing overhead); yet, the correlation between the vectors is preserved in the dimensionality reduction such that the comparison is as accurate as if it were performed on the full, non-reduced dimensionality vectors.
(A2) In some implementations of A1, obtaining the first vector and the second vector includes obtaining a first embedding and a second embedding from a collection of embeddings co-trained by a same deep learning model.
(A3) In some implementations of A2, the first embedding corresponds to a first object type and the second embedding corresponds to a second object type and the method further comprises using the derived mapping functions to generate multiple reduced dimensionality embeddings of the first object type and multiple reduced dimensionality embeddings of the second object type.
(A4) In some implementations of A2, or A3, the method further includes computing a similarity metric (
(A5) In some implementations of A2, A3, or A4, the method further includes storing the mapping functions; obtaining updated embeddings generated by the same deep learning model; and using the stored mapping functions to reduce dimensionality of the updated embeddings.
The method of A5 is advantageous because it allows a pre-computed set of transformation functions to be reused to reduce the dimensionality of a set of embeddings even as those embeddings are updated over time without regenerating the set of transformation functions (thereby, reducing computational overhead).
(A6) In some implementations of A1, A2, A3, A4, or A5, deriving the mapping functions (
The method of A6 is advantageous because it may allow a client utilizing the transformation functions to specify a select degree of dimensionality reduction that is desired according to the client's respective computing platform and operations.
(A7) In some implementations of A1, A2, A3, A4, A5, or A6, the method further includes receiving input from a user identifying one or more select mapping functions of the multiple mapping functions; and generating transformed versions of the first vector and the second vector using the one or more select mapping functions.
(A8) In some implementations of A1, A2, A3, A4, A5, A6, or A7, the multiple constraints further include at least one constraint preserving a magnitude of an individual vector before and after being subjected to a dimensionality reduction using one of the derived mapping functions.
The method of A8 is advantageous at least because the use of a secondary constraint facilitates mathematical convergence when solving for the mapping functions.
In another aspect, some implementations include a computing system (e.g.,
In yet another aspect, some implementations include a computer-readable storage medium for storing computer-readable instructions. The computer-readable instructions, when executed by one or more hardware processors, perform any of the methods described herein (e.g., methods A1-A6).
The implementations described herein are implemented as logical steps in one or more computer systems. The logical operations may be implemented (1) as a sequence of processor-implemented steps executing in one or more computer systems and (2) as interconnected machine or circuit modules within one or more computer systems. The implementation is a matter of choice, dependent on the performance requirements of the computer system being utilized. Accordingly, the logical operations making up the implementations described herein are referred to variously as operations, steps, objects, or modules. Furthermore, it should be understood that logical operations may be performed in any order, unless explicitly claimed otherwise or a specific order is inherently necessitated by the claim language. The above specification, examples, and data, together with the attached appendices, provide a complete description of the structure and use of exemplary implementations.