Embedding Normalization Method and Electronic Device Using Same

Information

  • Patent Application
  • 20220207368
  • Publication Number
    20220207368
  • Date Filed
    December 29, 2021
  • Date Published
    June 30, 2022
Abstract
A method of training a neural network model for predicting a click-through rate (CTR) of a user in an electronic device includes normalizing an embedding vector on the basis of a feature-wise linear transformation parameter, and inputting the normalized embedding vector into a neural network layer, wherein the feature-wise linear transformation parameter is defined such that the same value is applied to all elements of the embedding vector.
Description
CROSS-REFERENCE TO RELATED APPLICATION(S)

This application claims the benefit of Korean Patent Application No. 10-2021-0158515 filed on Nov. 17, 2021 in the Korean Intellectual Property Office, and Korean Patent Application No. 10-2020-0188326 filed on Dec. 30, 2020 in the Korean Intellectual Property Office, the disclosures of which are incorporated herein by reference.


TECHNICAL FIELD

The present disclosure relates to an embedding normalization method and an electronic device using the same. More specifically, the present disclosure relates to a method for training a neural network model while preserving the importance of a feature vector.


DESCRIPTION OF THE RELATED ART

Artificial intelligence (AI) is being used in various industries. The AI, which operates in a manner similar to human thinking, may be utilized to extract features of objects that can be used to learn (or approach) an objective.


Recently, research has been conducted to confirm interest in a specific object (e.g., an advertisement, an article, etc.) through a neural network model. When interest in a specific object is confirmed, a user may be guided to a destination (e.g., using a neural network) based on that interest. Analysis of the interest may be improved by controlling various stages in training of the neural network model, and a method of assigning a high level of importance to a main interest and/or extracting the main interest may be utilized.


Technical Goal

Learning interactions between feature vectors with respect to a specific target (or objective) may be a fundamental matter in click-through rate (CTR) prediction. Factorization machines (FM), which simultaneously consider primary and secondary feature interactions, are an example of a model used in CTR prediction. Primary feature interactions may refer to interactions of an individual feature with itself, and secondary feature interactions may refer to pair-wise interactions between features. For example, attentional factorization machines (AFM) may utilize an attention mechanism to automatically capture the importance of feature interactions. Recently, neural factorization machines (NFM), Wide&Deep, DeepFM, xDeepFM, product-based neural networks (PNN), automatic feature interaction learning (AutoInt), and AFN have been utilized to model high-level feature interactions through a deep neural network.


Recently, various attempts have been made to apply normalization methods to improve the performance of CTR prediction. NFM, AFN, and AutoFIS may reliably train elements of the deep neural network by utilizing a batch normalization (BN) method. In order to train a CTR prediction model, PNN and MINA utilize a layer normalization (LN) method.


Such normalization methods do not preserve the importance of feature embeddings, but rather are focused on the stability of training the elements of the deep neural network. That is, when normalization is performed for individual elements, the BN and the LN do not reflect a weight for importance because they use a parameter that is constant for a given dimension. When a result value after normalization does not reflect the importance, the accuracy of the CTR prediction may be negatively affected.


Technical Solutions

According to an aspect there is provided a method of training a neural network model for predicting a CTR of a user in an electronic device, which includes normalizing an embedding vector on the basis of a feature-wise linear transformation parameter, and inputting the normalized embedding vector into a neural network layer. The feature-wise linear transformation parameter may be defined such that the same value is applied to all elements of the embedding vector.


The normalizing may include calculating a mean of the elements of the embedding vector, calculating a variance of the elements of the embedding vector, and normalizing the embedding vector on the basis of the mean, the variance, and the feature-wise linear transformation parameter.


The feature-wise linear transformation parameter may include a scale parameter and a shift parameter.


Each of the scale parameter and the shift parameter may be a vector having the same dimension as the embedding vector, and all elements thereof may have the same value.


Each of the scale parameter and the shift parameter may have a scalar value.


According to an aspect there is provided a neural network system for predicting a CTR of a user implemented by at least one electronic device, which includes an embedding layer, a normalization layer, and a neural network layer.


The embedding layer may map one or more features included in a feature vector to an embedding vector, the normalization layer may normalize the embedding vector on the basis of a feature-wise linear transformation parameter, the neural network layer may perform a neural network operation on the basis of the normalized embedding vector, and the feature-wise linear transformation parameter may be defined such that the same value is applied to all elements of the embedding vector.


The normalization layer may calculate a mean of the elements of the embedding vector, calculate a variance of the elements of the embedding vector, and normalize the embedding vector on the basis of the mean, the variance, and the feature-wise linear transformation parameter.


The feature-wise linear transformation parameter may include a scale parameter and a shift parameter.


Each of the scale parameter and the shift parameter may be a vector having the same dimension as the embedding vector, and all elements thereof may have the same value.


Each of the scale parameter and the shift parameter may have a scalar value.


Effects

Unlike a variance-only layer normalization (VO-LN) which is being studied to overcome a limitation, the embedding normalization (EN) method according to various embodiments of the present disclosure may perform calculations on the basis of parameters that reflect weights for feature vectors. Weights in accordance with a number of embodiments of the invention can be calculated using attention models appropriate to a particular application. Values normalized on the basis of parameters calculated by the EN may be widely used in model classes of various neural networks (for example, a deep neural network, a shallow neural network, etc.) and may reflect a weight for importance.


In accordance with the EN method according to various embodiments of the present disclosure, the electronic device may preserve the importance of the feature vector to improve performance of a CTR prediction model. A BN method or an LN method may excessively equalize a norm of feature embedding and potentially harm performance of a model.


However, the EN method of the present disclosure may explicitly model a norm of individual feature embeddings to rapidly converge and reflect the importance of individual features, thereby increasing accuracy of the CTR.





BRIEF DESCRIPTION OF THE DRAWINGS

These and/or other aspects, features, and advantages of the invention will become apparent and more readily appreciated from the following description of example embodiments, taken in conjunction with the accompanying drawings of which:



FIG. 1 is a schematic block diagram illustrating a feature of an electronic device according to various example embodiments of the present disclosure;



FIG. 2 is a schematic flowchart illustrating an embedding normalization method according to various example embodiments of the present disclosure;



FIG. 3 is a schematic diagram illustrating a structure of a neural network training model including an embedding normalization (EN) layer according to various example embodiments of the present disclosure;



FIG. 4 is an example diagram illustrating performance of EN on a feature vector according to various example embodiments of the present disclosure;



FIG. 5 is a schematic diagram illustrating a structure of the performance of the EN on the feature vector according to the various example embodiments of the present disclosure; and



FIG. 6 is a detailed example diagram illustrating normalized values according to execution of an EN process of FIG. 4.





DETAILED DESCRIPTION

The terms used in the example embodiments are selected, as much as possible, from general terms that are widely used at present while taking into consideration the functions obtained in accordance with the present disclosure, but these terms may be replaced by other terms based on intentions of those skilled in the art, customs, emergence of new technologies, or the like. Also, in a particular case, terms that are arbitrarily selected by the applicant of the present disclosure may be used. In this case, the meanings of these terms may be described in corresponding description parts of the disclosure. Accordingly, it should be noted that the terms used herein should be construed based on practical meanings thereof and the whole content of this specification, rather than being simply construed based on names of the terms.


Throughout this disclosure, when an element is referred to as “comprising” a component, it means that the element can further include other components, not excluding the other components, unless specifically stated otherwise. In addition, the term “˜part”, “˜module,” or the like disclosed herein means a unit for processing at least one function or operation, and this unit may be implemented by hardware, software, or a combination of hardware and software.


The expression “at least one of A, B, and C” may include the following meanings: A alone; B alone; C alone; both A and B together; both A and C together; both B and C together; and all three of A, B, and C together.


A “terminal” referred to below may be implemented as a computer or a portable terminal, which is capable of accessing a server or another terminal through a network. Here, the computer includes, for example, a notebook, a desktop, and a laptop, on which a web browser is installed, and the portable terminal is, for example, a wireless communication device which secures portability and mobility and may include terminals based on communication standards such as international mobile telecommunication (IMT), code division multiple access (CDMA), wide-code division multiple access (W-CDMA), and long term evolution (LTE), and wireless communication devices based on all types of handheld devices such as smartphones and tablet personal computers (PCs).


Example embodiments of the present disclosure will be described below in detail sufficient for implementation by those skilled in the art, with reference to the accompanying drawings. However, the present disclosure may be implemented in various different forms, and thus it is not limited to the embodiments described herein.


Hereinafter, example embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.


In describing the example embodiments, descriptions of technical contents which are well known in the technical field to which the present disclosure pertains and are not directly related to the present disclosure will be omitted herein. This is to more clearly convey the gist of the present disclosure by omitting unnecessary descriptions.


For the same reason, some components in the accompanying drawings may be exaggerated, omitted, or schematically illustrated. In addition, a size of each component does not fully reflect an actual size. In each of the accompanying drawings, the same or corresponding components are assigned the same reference numerals.


The advantages and features of the present disclosure and the manner of achieving the advantages and features will become apparent with reference to the embodiments described in detail below with the accompanying drawings. The present disclosure may, however, be implemented in many different forms and should not be construed as being limited to the embodiments set forth herein, and the embodiments are provided such that this disclosure will be thorough and complete and will fully convey the scope of the present disclosure to those skilled in the art, and the present disclosure is defined by merely the scope of the appended claims.


In this case, it will be understood that each block of flowchart diagrams and combinations of the flowchart diagrams may be performed by computer program instructions. These computer program instructions may be embodied in a processor of a general purpose computer, a special purpose computer, or other programmable data processing equipment such that the instructions performed by the processor of the computer or other programmable data processing equipment generate parts for performing functions described in flowchart block(s).


These computer program instructions may use a computer or other programmable data processing equipment for implementing a function in a specific manner or may be stored in a computer readable memory, and thus the instructions which use the computer or are stored in the computer readable memory may produce a manufacturing article including instruction parts for performing the functions described in the flowchart block(s). Since the computer program instructions can also be loaded onto the computer or other programmable data processing equipment, a series of operations may be performed on the computer or other programmable data processing equipment to generate a computer-executed process, so that the instructions operating the computer or other programmable data processing equipment can provide operations for performing the functions described in the flowchart block(s).


In addition, each block may represent a module, a segment, or a portion of code, which includes one or more executable instructions for executing specified logical function(s). It should also be noted that, in some alternative embodiments, it is also possible for the functions mentioned in the blocks to occur out of order. For example, two blocks shown in succession may be performed substantially simultaneously, or the two blocks may sometimes be performed in the reverse order according to the corresponding functions.


Artificial intelligence (AI) may be a type of computer program which mimics human intelligence through a series of logical algorithms which think, learn, and determine like humans. The AI may process complicated operations in a processor corresponding to the human brain through a neural network which resembles the human nervous system. Machine learning that can be utilized in deep learning processes, and a process of normalizing and modeling a feature through such learning, will be described herein. In this disclosure, the terms machine learning and machine training may be used interchangeably.


Neural networks may refer to networks in which an operation principle of a neuron, which is a basic unit of the human nervous system, and a connection relationship between neurons are modeled. Neural networks may be data processing systems in which individual nodes or processing elements are connected in the form of layers. Neural networks may include a plurality of layers, and each layer may include a plurality of neurons. In addition, neural networks may include synapses corresponding to a nerve stimulator capable of transmitting data between the neurons. In this disclosure, the terms layer and class may be used interchangeably.


Specifically, neural networks may generally refer to data processing models in which artificial neurons vary the bonding strengths of the synapses through repetitive learning so as to acquire the ability to solve a given problem or a problem in which a variable occurs. In this disclosure, the terms neural network and artificial neural network may be used interchangeably.


Neural networks may be trained using training data. Specifically, the training may include a process of determining feature-wise linear transformation parameters of the neural network using feature data so as to achieve objectives such as (but not limited to) classification, regression analysis, and clustering of input data. More specifically, determining feature-wise linear transformation parameters may include (but is not limited to) determining a set of one or more weights and/or a set of one or more biases.


Neural networks may be trained using input data to classify or cluster the input data according to a pattern, and the trained neural network may be referred to as a trained model. Specifically, training methods may be classified into supervised learning, unsupervised learning, semi-supervised learning, and/or reinforcement learning. More specifically, supervised learning may be a method, which is a type of machine learning, of inferring functions from the training data. Outputting a continuous result value among the functions inferred through the machine learning may be a regression analysis, and outputting a result value by predicting a class of the input data may be a classification.


In supervised learning, the training data may include labels, and the labels may include a meaningful result value which neural networks can infer. Specifically, the result value which the neural network should infer may be label data. More specifically, the training data may include label data corresponding to the training data, and the neural network may acquire an input value and a label from the training data during training.


Training data may include a plurality of feature vectors, and neural networks may infer the training data and identify a label for an individual feature vector to output the label data as the result value. Neural networks may infer a function with respect to correlation between pieces of data through the training data and the label data. In addition, a parameter with respect to the individual vector may be optimized through feedback on the function inferred from the neural network.



FIG. 1 is a schematic block diagram illustrating a feature of an electronic device according to various example embodiments of the present disclosure.


The electronic device may include a device including a neural network. The electronic device is a device capable of performing machine learning using training data and may include a device capable of performing learning using a model that includes a neural network. For example, the electronic device may be configured to receive, classify, store, and output data to be used for various processes, such as (but not limited to) data mining, data analysis, intelligent decision making, and/or a machine learning algorithm.


The electronic device may include various devices for training the neural network. For example, the electronic device may be implemented as a plurality of server sets, a cloud server, or a combination thereof. Specifically, the electronic device may acquire a result value from data analysis or training through distributed processing.


Referring to FIG. 1, the electronic device may include a processor 110, an input/output (I/O) interface 120, and a memory 130 as components. The components of the electronic device shown in FIG. 1 are not limited thereto and may be added or replaced. The processor 110 may control or predict an operation of the electronic device through data analysis and a machine learning algorithm. The processor 110 may request, retrieve, receive, or utilize data and control the electronic device to execute a desired operation which is learned through training.


The processor 110 may be configured to derive and detect a result value with respect to an input value on the basis of a user input or a natural language input. The processor 110 may be configured to collect data for processing and storage. The collection of the data may include (but is not limited to) sensing of data through a sensor, extracting of data stored in the memory 130, or receiving of data from an external device through the I/O interface 120.


The processor 110 may convert an operation history of the electronic device into data and store the data in the memory 130. The processor 110 may acquire the best result value for performing a specific operation on the basis of the stored operation history data and a trained model.


When the specific operation is performed, the processor 110 may analyze a history according to execution of the specific operation through data analysis and a machine learning algorithm. Specifically, the processor 110 may update previously trained data on the basis of the analyzed history. That is, the processor 110 may improve accuracy of the data analysis and the machine learning algorithm on the basis of the updated data.


The I/O interface 120 may perform a function of transmitting data stored in the memory 130 of the electronic device or data processed by the processor 110 to another device or a function of receiving data from another device to the electronic device.


The processor 110 may train (that is, perform learning on) the neural network using the training data or the training data set. For example, the processor 110 may train the neural network through data obtained by preprocessing the acquired input value. As another example, the processor 110 may train the neural network through preprocessed data stored in the memory 130. Specifically, the processor 110 may determine an optimization model of the neural network and parameters used for optimization by repeatedly training the neural network using various training methods.


The memory 130 may store a model trained by the processor 110 or the neural network. For example, the memory 130 may store the trained model separately from the model being trained. Specifically, the memory 130 may store models during the process in which the neural network is trained so as to store a trained model according to the training history. In addition, the memory 130 may store a model in which the trained model is updated.


The memory 130 may store input data which includes input values, training data for model training, and model training history data. The input data stored in the memory 130 may include processed data suitable for training models and/or raw data which is not processed.


According to an example embodiment, normalization of the neural network model learned by the processor 110 may be performed by a method of batch normalization (BN), a method of layer normalization (LN), a method of embedding normalization (EN), or a method using a combination of the above methods. Normalization in accordance with a number of embodiments of the invention may include processing data using parameters, distributing an individual feature vector corresponding to a characteristic of individual data, and/or normalizing the individual feature vector through a mean and a parameter (for example, a first parameter or a second parameter), and will be described in greater detail herein. In particular, normalization described herein may be referred to as embedding normalization (EN).


According to an example embodiment, normalization may derive a result value capable of extracting characteristics of features from the feature vector more quickly, and in particular, the normalization may be utilized to train a model for predicting a click-through rate (CTR). The CTR may refer to a probability that a user will click on a feature of interest.


When prediction of the CTR is performed using a BN method or an LN method, accuracy of the prediction may be degraded. EN in accordance with a number of embodiments of the invention may stably train the neural network model and increase the prediction accuracy of the CTR. More specifically, training a model in accordance with a variety of embodiments of the invention may increase the speed and efficiency at which the training of the model converges, while maintaining the importance of each feature during a learning process and increasing stability according to repeated learning. In addition, according to the present disclosure, since the importance of the feature vector is preserved during the learning process, performance improvement may be achieved in a deep learning model, as well as in CTR prediction models without neural elements, such as (but not limited to) factorization machines (FM) and field-aware factorization machines (FFM).



FIG. 2 is a schematic flowchart illustrating an EN method according to various example embodiments of the present disclosure.


In operation S210, the electronic device may receive first training data. The training data may include a set of one or more feature vectors. Feature vectors may include multiple features. The electronic device may acquire feature vectors as input data for training a neural network model. In numerous embodiments, each feature of a feature vector may include an element for a user and an element for an item. For example, in the feature, the element for the user may indicate an age of the user, a gender thereof, a platform access time, a click log in a platform, and/or a platform usage history. The platform may be an online platform accessed by the user. As another example, the element for the item may include a type of content in the platform, a feature of the content, and/or an arrangement region of the content. The content may be a notice posted in the platform. The feature vector may be denoted as x={x1, x2, x3, . . . , xn} (n is a natural number), and elements x1 to xn of the feature vector may correspond to the features, as illustrated in the sketch below.
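
The following minimal sketch illustrates how such a feature vector might be assembled; the field names, values, and indexing scheme are hypothetical and are used here only for illustration.

```python
# A hypothetical raw feature vector x = {x1, ..., xn} mixing user and item elements.
raw = {
    "user_age_bucket": "20s",        # element for the user
    "user_gender": "F",              # element for the user
    "access_hour": "21h",            # platform access time
    "content_type": "news_article",  # element for the item (type of content)
    "placement": "top_banner",       # arrangement region of the content
}

# Each field keeps its own vocabulary; categorical values become integer indices
# so that they can later be mapped to embedding vectors e1 ... en.
vocabs = {field: {} for field in raw}

def encode(field, value):
    # Assign the next free index the first time a value is seen (illustration only).
    return vocabs[field].setdefault(value, len(vocabs[field]))

x = [encode(field, value) for field, value in raw.items()]
print(x)  # e.g., [0, 0, 0, 0, 0] -- indices of x1 ... x5 in their field vocabularies
```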


According to an example embodiment, the electronic device may map individual features to embedding vectors. The mapping may refer to performing embedding for each feature. The mapped embedding vectors may be denoted e1, e2, e3, e4, . . . , en, each corresponding to a feature. For example, the electronic device may perform an embedding lookup on the features x1 to xn included in the feature vector x to map the features x1 to xn to the corresponding embedding vectors e1 to en. In this case, each of the embedding vectors e1 to en may be a vector in d dimensions (where d is a natural number).


According to various embodiments, the electronic device may map a first element of the feature vector to a first embedding vector and map a second element of the feature vector to a second embedding vector. That is, n features included in the feature vector may be mapped to the first to nth embedding vectors, respectively. The embedding vector to which each feature (for example, an ith element) of the feature vector is mapped may be referred to as an ith embedding vector.


The embedding vector may be a learnable parameter. As learning of the neural network model proceeds, the embedding vector may be learned such that the neural network model may perform its intended purpose.


In operation S220, the electronic device may acquire an embedding matrix by embedding feature vectors. The electronic device may generate an input embedding matrix by embedding feature vectors. For example, the input embedding matrix may include a result of embedding a number of feature vectors corresponding to a batch size. That is, when the feature vector is n dimensions, the embedding vector is d dimensions, and the batch size is b, the input matrix may have a size of b*n*d. Although this example describes using an embedding matrix with a batch of feature vectors, one skilled in the art will recognize that similar systems and processes can be applied for individual feature vectors (i.e., batch size 1).
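
As a rough sketch of operation S220 (with hypothetical table sizes and indices not taken from the disclosure), an input embedding matrix of size b*n*d can be built by looking up one learnable embedding table per feature field:

```python
import numpy as np

b, n, d = 4, 3, 8          # batch size, number of features, embedding dimension
vocab_sizes = [10, 5, 7]   # cardinality of each feature field (hypothetical)

# One learnable embedding table per feature field, randomly initialized here.
rng = np.random.default_rng(0)
tables = [rng.normal(0.0, 0.01, size=(v, d)) for v in vocab_sizes]

# A batch of feature vectors; each entry is the categorical index of that feature.
x_batch = np.array([[1, 0, 6],
                    [9, 4, 2],
                    [3, 3, 0],
                    [7, 1, 5]])  # shape (b, n)

# Embedding lookup: map the i-th feature of every example to its embedding vector.
E = np.stack([tables[i][x_batch[:, i]] for i in range(n)], axis=1)
print(E.shape)  # (4, 3, 8), i.e., an input matrix of size b * n * d
```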


In operation S230, the electronic device may normalize the embedding vector on the basis of the embedding matrix. The electronic device may output an embedding matrix E consisting of embedding vectors through embedding on the feature vectors. The embedding matrix E may be an input matrix to a normalization process in accordance with numerous embodiments of the invention.


According to an example embodiment, the normalization may be performed on each embedding vector of an input matrix. The electronic device may perform the normalization such that the importance (for example, the norm) of each embedding vector is preserved. In the related art, the magnitude (norm) of each embedding vector tends to be equalized by the normalization, so that issues such as gradient vanishing or gradient exploding may occur. However, the electronic device of the present disclosure may prevent such issues from occurring.


According to an example embodiment, the electronic device may perform the normalization on the embedding vector on the basis of a linear transformation parameter. The linear transformation parameter may include a scale parameter and/or a shift parameter. Each of the scale parameter and the shift parameter may be expressed as a vector having the same dimensions as the embedding vector. In some embodiments, all elements of the scale parameter and/or the shift parameter may have the same value. When the normalization is performed, the electronic device may apply the scale parameter and the shift parameter to each element of the embedding vector.


According to an example embodiment, the EN may apply the scale parameter and the shift parameter which have the same value corresponding to the same dimension to all the elements of the individual embedding vectors. That is, the electronic device may equally apply linear transformation parameter values to all the elements of the embedding vector at a given index of the embedding matrix. This may be different from setting a dimension-wise parameter when layer normalization is performed.


According to an example embodiment, systems and processes for EN in accordance with the present disclosure may define linear transformation parameters such that the same scale parameter value and the same shift parameter value are applied to all elements of one embedding vector so that the normalization may be performed to preserve importance of the embedding vector. That is, the electronic device may perform the normalization on each embedding vector on the basis of the feature-wise linear transformation parameter. The feature-wise linear transformation parameter may include a scale parameter and/or a shift parameter. The electronic device may perform the normalization on the basis of the scale parameter and/or the shift parameter which may each be defined as the same scalar value for all the elements of the embedding vector.
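
The difference between a dimension-wise parameter (as used in layer normalization) and the feature-wise parameter described above can be seen directly in the parameter shapes. The snippet below is a minimal sketch under the notation of this disclosure; the variable names are illustrative only.

```python
import numpy as np

n, d = 3, 8  # number of features, embedding dimension

# Layer normalization: one scale/shift value per dimension, shared across features.
gamma_ln = np.ones(d)   # shape (d,) -- dimension-wise
beta_ln = np.zeros(d)   # shape (d,)

# Embedding normalization: one learnable scalar per feature; expressed as a vector,
# it has the same dimension d as the embedding, with all elements equal.
gamma_f = np.ones(n)    # scalar scale value per feature x
beta_f = np.zeros(n)    # scalar shift value per feature x
gamma_en = np.ones((n, d)) * gamma_f[:, None]  # every element of row x equals gamma_f[x]
beta_en = np.ones((n, d)) * beta_f[:, None]    # every element of row x equals beta_f[x]
```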



FIG. 3 is a schematic diagram illustrating a structure of a neural network model including an EN layer according to various example embodiments of the present disclosure. FIG. 3 illustrates a process in which a feature vector 310 is input to the neural network model and a result value is output.


According to an example embodiment, the neural network system may include a plurality of neural network layers arranged in a sequence from the lowest layer to the highest layer. The neural network system may generate neural network outputs from neural network inputs by processing the neural network inputs through the respective layers of the sequence. The neural network system may receive input data and generate a score for the input data. For example, when the input data of the neural network system includes a feature corresponding to a feature extracted from content, the neural network system may output a score for each object category. The score for each object category may indicate a probability that the content includes a feature of an object in that category. As another example, when the input data of the neural network system is a feature of an image for an article in a specific platform, the neural network system may indicate a probability that the image for the article in the specific platform is selected.


According to an example embodiment, the neural network model may receive the feature vector 310 as input data and output a result value through a first neural network layer (e.g., an embedding layer) 320, an embedding normalization layer 330, and a second neural network layer 340. In this case, the feature vector 310 may be an output of a lower layer not shown in FIG. 3, and the neural network layer 340 may include a plurality of neural network layers.


The embedding layer 320 may perform the embedding on the feature vector 310. The embedding layer 320 may generate an input matrix including the embedding vectors for the features of the feature vector by mapping each feature included in the feature vector 310 to the embedding vector. The embedding operation of the embedding layer 320 may be performed in the same manner as the embedding operation described in operation S220 of FIG. 2.


The normalization layer 330 may perform the normalization on the embedding vector included in the input matrix. According to an example embodiment, the normalization may be performed according to Equation 1 below.











\hat{e}_x = \gamma_x \odot \left( \frac{e_x - \mu_x}{\sqrt{\sigma_x^2 + \epsilon}} \right) + \beta_x \qquad [\text{Equation 1}]







In Equation 1, e_x denotes an embedding vector corresponding to a feature x included in the feature vector 310, and ê_x may be a normalized embedding corresponding to the feature x. ⊙ may refer to an element-wise multiplication operation. ε is a relatively small scalar value and may be added to the variance to prevent division by zero when the normalization is performed.


In Equation 1, μ_x and σ_x^2 may be the mean and the variance of the elements of the embedding vector e_x, respectively. μ_x and σ_x^2 may be calculated as in Equations 2 to 5, respectively. In Equations 2 to 5, (e_x)_k denotes the kth element of the embedding vector e_x, and d denotes the dimension of the embedding vector e_x (that is, the number of elements).










\mu_x = \frac{1}{d} \sum_k (e_x)_k \qquad [\text{Equation 2}]

\sigma_x^2 = \frac{1}{d} \sum_k \left( (e_x)_k - \mu_x \right)^2 \qquad [\text{Equation 3}]

\boldsymbol{\mu}_x = \mathbf{1} \cdot \mu_x \qquad [\text{Equation 4}]

\boldsymbol{\sigma}_x^2 = \mathbf{1} \cdot \sigma_x^2 \qquad [\text{Equation 5}]







In Equation 1, γ_x and β_x may be linear transformation parameters. γ_x and β_x may be a scale parameter and a shift parameter, respectively. γ_x and β_x may be set as shown in Equations 6 and 7 below, respectively, and γ_x and β_x may be learnable parameters. That is, each of the parameters γ_x and β_x to be learned is a vector having the same dimension as the embedding vector, and all of its elements share the single learned scalar value γ_x^f or β_x^f, which denotes the gamma or beta value corresponding to the feature x, respectively.





\gamma_x = \mathbf{1} \cdot \gamma_x^f \qquad [\text{Equation 6}]

\beta_x = \mathbf{1} \cdot \beta_x^f \qquad [\text{Equation 7}]


According to an example embodiment, the electronic device may perform the normalization operation of the normalization layer 330 described with reference to Equations 1 to 7 in a manner expressed as Equation 8.











\mathrm{EN}(e_x) = \gamma_x^f \left( \frac{e_x - \mu_x}{\sqrt{\sigma_x^2 + \epsilon}} \right) + \beta_x^f, \quad \mu_x = \frac{1}{d} \sum_k (e_x)_k, \quad \sigma_x^2 = \frac{1}{d} \sum_k \left( (e_x)_k - \mu_x \right)^2 \qquad [\text{Equation 8}]







In Equation 8, e_x may be the embedding vector, d may be the dimension of the embedding vector, μ_x may be the mean of all the elements of the embedding vector, σ_x^2 may be the variance of all the elements of the embedding vector, (e_x)_k may be the kth element of the embedding vector e_x, and γ_x^f and β_x^f may be the linear transformation parameters for each feature.


The feature-wise linear transformation parameters γ_x^f and β_x^f may be learnable parameters, and each may have a scalar value. During the normalization process, the feature-wise linear transformation parameters γ_x^f and β_x^f may be defined such that the same value is applied to all the elements of the embedding vector. In addition, the electronic device may perform the normalization on the embedding vector on the basis of the mean and the variance of the elements of the embedding vector and the feature-wise linear transformation parameters.
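
As a minimal, non-authoritative sketch of Equation 8, the normalization can be written as follows, assuming an embedding matrix of shape (n, d) and per-feature scalar parameters; the epsilon value is an arbitrary choice.

```python
import numpy as np

def embedding_normalization(E, gamma_f, beta_f, eps=1e-5):
    """Normalize each embedding vector e_x of E (shape (n, d)) as in Equation 8.

    gamma_f, beta_f: arrays of shape (n,), one learnable scalar per feature x.
    """
    mu = E.mean(axis=1, keepdims=True)          # mean over the d elements of e_x
    var = E.var(axis=1, keepdims=True)          # variance over the d elements of e_x
    normalized = (E - mu) / np.sqrt(var + eps)  # (e_x - mu_x) / sqrt(sigma_x^2 + eps)
    # The same scalar gamma_f / beta_f is applied to all d elements of e_x.
    return gamma_f[:, None] * normalized + beta_f[:, None]

# Illustrative usage with arbitrary values.
E = np.array([[1.0, -1.3, -0.4],
              [0.2, 0.1, -0.1]])   # two embedding vectors, d = 3
gamma_f = np.array([2.5, 1.0])
beta_f = np.array([-1.2, 0.0])
print(embedding_normalization(E, gamma_f, beta_f))
```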


According to an example embodiment, embedding vectors normalized through the EN may retain relatively large differences in norm between one another. Specifically, an embedding vector normalized by the EN may be distinguished by a large difference in size (norm) depending on the corresponding feature. Accordingly, embedding vectors normalized by the EN can make the different importances of the features of the feature vector clear.


Even when there is no additional variation in model architecture or training process, the electronic device according to the example embodiment may utilize the EN together with the embeddings on the features. Specifically, in order to represent a given feature, the EN may be integrated into all types of CTR prediction models using feature embedding, and thus the EN has high application versatility.


According to various example embodiments, the electronic device may train the neural network model using embedding vectors normalized by the normalization layer 330 as an input of the neural network layer 340. Accordingly, the electronic device may acquire a result value for the feature vector 310 as an output.



FIG. 4 is an example diagram illustrating performance of EN on a feature vector according to various example embodiments of the present disclosure.


According to various example embodiments, the electronic device may perform CTR prediction by utilizing features having very high cardinality and very high sparsity. Cardinality may refer to the number of tuples constituting one relation, that is, the number of unique values of a specific data set. For example, when data is a set related to “gender,” the cardinality for “gender” may be set to two due to the distinction between “male” and “female.” Specifically, an attribute value related to a given list in a data set may be referred to as a tuple, and the number of tuples may correspond to the cardinality. Since the elements of the feature vector may contribute differently to the CTR prediction of the neural network model, the electronic device may need to clearly distinguish the importance of an embedding vector 410. The result of the EN with respect to the feature vector may naturally imply the importance of the corresponding feature vector.


Referring to FIG. 4, the electronic device may acquire the embedding vector 410 as an input in a unit of an index i. For example, the electronic device may acquire the embedding vectors e1, e2, e3, and e4 of the embedding vector 410, where the embedding vectors correspond to the features x1, x2, x3, and x4, respectively. That is, the electronic device may preserve the significance of individual features even when there is high cardinality and sparsity in the features of the input feature vectors. When the EN is performed on each element of each individual embedding vector of the embedding vector 410, the electronic device may clearly distinguish the importance of each feature of the embedding vector 410 using the feature-wise linear transformation parameter.


According to various example embodiments, the electronic device may perform the EN on the embedding vector 410 through a mean 420 and a variance 430 of the elements of the embedding vector 410 and a parameter 440. The parameter 440 is the feature-wise linear transformation parameter, and when the normalization is performed on the embedding vector 410, the parameter 440 may be set such that the same value is used for all the elements of one embedding vector 410 for each feature.


According to an example embodiment, the parameter 440 may include a first parameter 441 and a second parameter 442. Here, the first parameter 441 may correspond to the scale parameter, and the second parameter 442 may correspond to the shift parameter. Each of the first parameter 441 and the second parameter 442 may be a vector in the same dimension as the embedding vector 410, and all elements thereof may have the same value. Accordingly, when the embedding vector 410 has the same index, the electronic device may perform the normalization by applying the same parameter value to all elements of the same index.


According to an example embodiment, the electronic device may perform CTR prediction through a factorization machine (FM) which is a representative recommendation system model. In particular, the prediction made in the FM using the EN according to the present disclosure by the electronic device may be derived as in Equation 9 below.












\hat{y}_{\mathrm{FM}}(x) = w_0 + \sum_{i=1}^{n} w_{x_i} + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \hat{e}_{x_i} \cdot \hat{e}_{x_j} \qquad [\text{Equation 9}]







In Equation 9, w_0 refers to a global bias, and w_{x_i} may be a weight for modeling the primary interaction of the ith feature x_i. ê_{x_i} and ê_{x_j} may be the normalized embeddings corresponding to x_i and x_j.
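
A brief sketch of the FM prediction of Equation 9 using normalized embeddings follows; the weights, embedding values, and the helper name are assumptions made for illustration and do not come from the disclosure.

```python
import numpy as np

def fm_predict(w0, w, E_hat):
    """Equation 9: global bias + primary interactions + pair-wise dot products
    of the normalized embeddings (E_hat has shape (n, d))."""
    n = E_hat.shape[0]
    pairwise = sum(E_hat[i] @ E_hat[j] for i in range(n) for j in range(i + 1, n))
    return w0 + w.sum() + pairwise

# Illustrative values only.
w0 = 0.1                                              # global bias w_0
w = np.array([0.3, -0.2, 0.05])                       # primary-interaction weights w_{x_i}
E_hat = np.random.default_rng(1).normal(size=(3, 4))  # normalized embeddings e_hat_{x_i}
print(fm_predict(w0, w, E_hat))                       # raw CTR score (a sigmoid may follow)
```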


According to an example embodiment, it may be assumed that a feature x′ is independent from a click label y and does not include a useful signal with respect to y. For example, the feature x′ may not be significant for predicting y so that a dot product term of a normalized value may be zero for all feature vectors. That is, the dot product may correspond to zero with respect to a feature vector with low importance. When the dot product is not calculated as zero for a feature vector with low importance, noise may occur when the electronic device trains the neural network model.


According to an example embodiment, the electronic device may derive an embedding vector and an embedding matrix including the embedding vector through the EN for the feature vector. In order to satisfy a separate constraint condition (for example, a condition in which a dot product becomes zero) for the feature vector 410, the electronic device may have to perform a process of identifying elements orthogonal to the other embeddings. Such a constraint may confine the embeddings for different features to a (d−1)-dimensional subspace in which they are orthogonal to each other, which reduces the effective dimension of the embedding space and harms the model capacity.


Existing normalization methods may have a side effect of equalizing importance for each element of different feature vectors. The electronic device of the present disclosure may perform the EN by applying the feature-wise linear transformation parameter as the same value with respect to all elements, thereby clearly preserving the meaning of individual feature vectors for clear determination of importance.



FIG. 5 is a schematic diagram illustrating a structure of the performance of the EN on the feature vector according to the various example embodiments of the present disclosure.


According to various embodiments, the electronic device may train the neural network model through an embedding matrix including embedding vectors in which feature vectors included in the training data are embedded.


In the training through the neural network model, the electronic device may perform the normalization on the embedding matrix using the normalization layer 520. A result value obtained by normalizing the embedding matrix for the training data may be expressed as a normalized value. The normalized value may be acquired by applying a parameter 510 (for example, the parameter 440 of FIG. 4) in the normalization layer 520. The parameter 510 which the electronic device applies to the normalization layer 520 so as to train the neural network may correspond to gamma (for example, the first parameter 441 of FIG. 4) and beta (for example, the second parameter 442 of FIG. 4). In addition, a new parameter 530 generated on the basis of the parameter 510 and the normalized value may correspond to a new gamma value and a new beta value. The electronic device may apply the new parameter 530 to other training data to perform the EN. According to various example embodiments, during the training in the neural network model, the electronic device may acquire the new parameter 530 through back propagation.
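
One way to realize the normalization layer 520 with the learnable parameter 510 updated through back propagation is sketched below as a PyTorch-style module; the class name, initialization, and tensor shapes are assumptions made for this sketch rather than details taken from the disclosure.

```python
import torch
import torch.nn as nn

class EmbeddingNormalization(nn.Module):
    """Feature-wise normalization with one learnable scalar gamma/beta per feature."""

    def __init__(self, num_features, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(num_features))   # scale (cf. parameter 441)
        self.beta = nn.Parameter(torch.zeros(num_features))   # shift (cf. parameter 442)

    def forward(self, E):
        # E: (batch, num_features, dim) embedding matrix from the embedding layer.
        mu = E.mean(dim=-1, keepdim=True)
        var = E.var(dim=-1, unbiased=False, keepdim=True)
        normalized = (E - mu) / torch.sqrt(var + self.eps)
        # The same scalar is broadcast over every element of a given embedding vector.
        return self.gamma.view(1, -1, 1) * normalized + self.beta.view(1, -1, 1)

# During training, gradients flowing back through this module update gamma and beta,
# yielding the updated parameter that is then applied to subsequent training data.
layer = EmbeddingNormalization(num_features=4)
out = layer(torch.randn(2, 4, 8))  # (batch=2, n=4, d=8)
```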


According to an example embodiment, the electronic device may apply the same parameter to each individual element of the embedding vector, thereby reflecting a weight for the embedding vector. For example, a weight may be determined to be high for a feature that has high importance, that is expected to be selected because of a user's high interest among the features, and that leads to a click. That is, in the EN, the parameters applied to the individual elements of the embedding vector may be determined by iterative training. The electronic device may iteratively train the neural network model to optimize the normalization parameter (the parameter 440 of FIG. 4) applied to the embedding matrix.



FIG. 6 is a detailed example diagram illustrating normalized values according to execution of an EN process of FIG. 4. Referring to FIG. 6, results of normalized values through the BN method and the LN method may be compared together with the EN method of FIG. 4.


According to an example embodiment, the EN method of the present disclosure may be used together with an embedding technique which maps features to an embedding vector in a latent space. For example, with respect to x which is the given input data, the electronic device may embed the feature vector x to generate an input matrix E including the embedding vector e. For example, the feature vector x may include the features x1, x2, x3, and x4 as respective elements.


Referring to the rightmost graph of FIG. 6, values normalized by the BN (FIG. 6A) and the LN (FIG. 6B) do not reflect the weights of the individual embedding vectors and thus do not exhibit a difference in importance. On the other hand, the values normalized by the EN method of FIG. 6C reflect the weights of the individual embedding vectors so that the importance may be sufficiently confirmed at e1 and e3. The EN method may acquire a high-accuracy result value at a rate that is faster than the BN or the LN for the same input data, and as shown in FIG. 6, it is comparable in accuracy with the other normalization methods.


Referring to FIG. 6, in the EN layer the electronic device may generate a first normalized value on the basis of the first parameter and the second parameter corresponding to the first index, applied to the elements in all dimensions included in the first embedding vector. For example, the first embedding vector e1 may be an embedding vector to which x1 is mapped, and the first embedding vector may include 1.0, −1.3, and −0.4 as its elements in the respective dimensions. Here, 1.0 may correspond to a first dimensional element of the first embedding vector, −1.3 may correspond to a second dimensional element, and −0.4 may correspond to a third dimensional element.


According to an example embodiment, the electronic device may calculate a mean and a variance of the elements of each of the first embedding vector and the second embedding vector. Referring to FIG. 6, the electronic device may calculate a mean and a variance of the embedding vector e1 to which x1 is embedded. In the EN, the mean and the variance are calculated over the individual elements and correspond to the mean and the variance of the individual embedding vector across all dimensions. For example, in the EN of FIG. 6, the mean and the variance of the elements of the first embedding vector are calculated as −0.2 and 0.9, respectively.


According to an example embodiment, the first parameter and the second parameter are vectors in the same dimension, and all elements thereof may have the same value in all dimensions of the first index. Referring to FIG. 6, a value corresponding to the first index of gamma, which is the first parameter, is 2.5. In addition, a value corresponding to the first index of beta, which is the second parameter, is −1.2. When the EN is performed, the electronic device may apply the first parameter and the second parameter, which correspond to element indexes of an individual embedding vector (for example, an element index of the first embedding vector is 1, and an element index of the second embedding vector is 2) to the elements of all dimensions of the individual embedding vector as the same value.
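
The numbers shown in FIG. 6 for the first embedding vector can be reproduced with a few lines; ε is assumed negligible and omitted here for readability, so the results may differ slightly from the figure due to rounding.

```python
import numpy as np

e1 = np.array([1.0, -1.3, -0.4])   # first embedding vector (mapped from x1)
mu = e1.mean()                     # about -0.23, shown as -0.2 in FIG. 6
var = e1.var()                     # about  0.90, shown as  0.9 in FIG. 6
gamma_1, beta_1 = 2.5, -1.2        # first-index values of the first and second parameters

e1_hat = gamma_1 * (e1 - mu) / np.sqrt(var) + beta_1
print(mu, var, e1_hat)             # the same scalars are applied to all three elements
```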


Meanwhile, in the present specification and the accompanying drawings, example embodiments of the present disclosure have been disclosed, and although specific terms are used, these are used in a general sense to easily describe the technical contents of the present disclosure and to help understanding of the present disclosure; these specific terms are not intended to limit the scope of the present disclosure. In addition to the example embodiments disclosed herein, it is obvious to those skilled in the art to which the present disclosure pertains that other modifications can be implemented on the basis of the technical spirit of the present disclosure.


The electronic device or terminal device according to the above-described example embodiments may include interface devices such as a processor, a memory for storing and executing program data, a permanent storage such as a disk drive, a communication port for communicating with an external device, a touch panel, and a key button. Methods implemented as software modules or algorithms may be computer-readable codes or program instructions executable on the processor and may be stored on a computer-readable recording medium. Here, the computer-readable recording medium includes a magnetic storage medium (for example, a read-only memory (ROM), a random-access memory (RAM), a floppy disk, and a hard disk) and an optically readable medium (for example, a compact disc (CD)-ROM and a digital versatile disc (DVD)). The computer-readable recording medium may be distributed over computer systems connected through a network so that the computer-readable code may be stored and executed in a distributed manner. The computer-readable recording medium may be readable by a computer, and the computer-readable code may be stored in the memory and be executed on the processor.


The example embodiments may be implemented by functional block components and various processing operations. These functional blocks may be implemented in any number of hardware and/or software configurations which perform specific functions. For example, the example embodiments may employ integrated circuit components, such as a memory, a processor, logic, a look-up table, and the like, capable of executing various functions under the control of one or more microprocessors or by other control devices. Similar to the manner in which components may be implemented as software programming or software components, the example embodiments may include various algorithms implemented as data structures, processes, routines, or a combination of other programming components and may be implemented in a programming or scripting language such as C, C++, Java, assembler, or the like. The functional aspects may be implemented as algorithms executed on one or more processors. In addition, the example embodiments may employ the related art for an electronic environment setting, signal processing, and/or data processing. The terms such as a “mechanism,” an “element,” a “part,” and a “component” may be used broadly and are not limited to mechanical and physical components. These terms may include the meaning of a series of routines of software in association with a processor or the like.


The above-described example embodiments are merely examples, and other embodiments may be implemented within the scope of the appended claims which will be described below.

Claims
  • 1. A method of training a neural network model for predicting a click-through rate (CTR) of a user in an electronic device, the method comprising: mapping a feature included in a feature vector to an embedding vector; normalizing the embedding vector on the basis of a feature-wise linear transformation parameter; and inputting the normalized embedding vector into a neural network layer, wherein the feature-wise linear transformation parameter may be defined such that the same value is applied to all elements of the embedding vector during the normalizing.
  • 2. The method of claim 1, wherein the normalizing includes: calculating a mean of the elements of the embedding vector; calculating a variance of the elements of the embedding vector; and normalizing the embedding vector on the basis of the mean, the variance, and the feature-wise linear transformation parameter.
  • 3. The method of claim 1, wherein the feature-wise linear transformation parameter includes a scale parameter and a shift parameter.
  • 4. The method of claim 3, wherein each of the scale parameter and the shift parameter is a vector having the same dimension as the embedding vector, and all elements thereof have the same value.
  • 5. The method of claim 3, wherein each of the scale parameter and the shift parameter has a scalar value.
  • 6. The method of claim 1, wherein the normalizing is an operation of performing calculation of Equation 1 below:
  • 7. A computer program stored in a computer-readable recording medium in combination with hardware to execute the method of claim 1.
  • 8. A neural network system for predicting a click-through rate (CTR) of a user implemented by at least one electronic device, the neural network system comprising: an embedding layer; a normalization layer; and a neural network layer, wherein the embedding layer maps a feature included in a feature vector to an embedding vector, the normalization layer normalizes the embedding vector on the basis of a feature-wise linear transformation parameter, the neural network layer performs a neural network operation on the basis of the normalized embedding vector, and the feature-wise linear transformation parameter is defined such that the same value is applied to all elements of the embedding vector in the normalization process.
  • 9. The neural network system of claim 8, wherein the normalization layer calculates a mean of the elements of the embedding vector, calculates a variance of the elements of the embedding vector, and normalizes the embedding vector on the basis of the mean, the variance, and the feature-wise linear transformation parameter.
  • 10. The neural network system of claim 8, wherein the feature-wise linear transformation parameter includes a scale parameter and a shift parameter.
  • 11. The neural network system of claim 10, wherein each of the scale parameter and the shift parameter is a vector in the same dimension as the embedding vector, and all elements thereof have the same value.
  • 12. The neural network system of claim 10, wherein each of the scale parameter and the shift parameter has a scalar value.
  • 13. The neural network system of claim 8, wherein the normalization layer performs calculation of Equation 2 below:
Priority Claims (2)
Number Date Country Kind
10-2020-0188326 Dec 2020 KR national
10-2021-0158515 Nov 2021 KR national