This application claims priority to Korean Patent Application No. 2017-0122363 filed on Sep. 22, 2017 in the Korean Intellectual Property Office (KIPO), the entire contents of which are hereby incorporated by reference.
Example embodiments of the present invention generally relate to detecting an abnormal session of a server, and more specifically, to a method for detecting an abnormal session using a convolutional neural network and a long short-term memory (LSTM) neural network.
In general, while a server provides a client with a service, the client transmits request messages (e.g., HTTP requests) to the server, and the server generates response messages (e.g., HTTP responses) in response to the requests. The request messages and the response messages generated in the service providing process are arranged according to a time sequence, and the arranged messages are referred to as a session (e.g., an HTTP session).
When an error occurs in an operation of the server, or when an attacker gains access by hijacking login information of another user, the arrangement of the request messages and the response messages differs from the usual pattern, producing an abnormal session having a feature different from that of a normal session. In order to rapidly recover from a service error, a technology for monitoring sessions and detecting an abnormal session is needed. Meanwhile, machine learning is garnering attention as a technology for automatically extracting features of data and categorizing the data.
Machine learning is a type of artificial intelligence (AI), in which a computer performs predictive tasks, such as regression, classification, and clustering on the basis of data learned by itself.
Deep learning is a field of machine learning in which a computer is trained to mimic a human way of thinking, and is defined as a set of machine learning algorithms that attempt high-level abstraction (abstracting key contents or functions from a large amount of data or complicated material) through a combination of non-linear transformation techniques.
A deep learning structure is a concept designed based on artificial neural networks (ANNs). An ANN is an algorithm that mathematically models virtual neurons and simulates them so as to provide a learning capability similar to that of a human brain, and in many cases an ANN is used for pattern recognition. An artificial neural network model used in deep learning has a structure in which linear fitting and nonlinear transformation or activation are repeatedly stacked. Neural network models used in deep learning include a deep neural network (DNN), a convolutional neural network (CNN), a recurrent neural network (RNN), a restricted Boltzmann machine (RBM), a deep belief network (DBN), a deep Q-network, and the like.
Accordingly, example embodiments of the present invention are provided to substantially obviate one or more problems due to limitations and disadvantages of the related art.
Example embodiments of the present invention provide a method for detecting an abnormal session using an artificial neural network.
In some example embodiments, a method for detecting an abnormal session including a request message received by a server from a client and a response message generated by the server includes: transforming at least a part of messages included in the session into data in the form of a matrix; transforming the data in the form of the matrix into a representation vector, a dimension of which is lower than a dimension of the matrix of the data, using a convolutional neural network; and determining whether the session is abnormal by arranging the representation vectors obtained from the messages in an order in which the messages are generated to compose a first representation vector sequence, and analyzing the first representation vector sequence using a long short-term memory (LSTM) neural network.
The transforming of the at least a part of the messages into the data in the form of the matrix may include transforming each of the messages into data in the form of a matrix by transforming a character included in each of the messages into a one-hot vector.
The LSTM neural network may include an LSTM encoder including a plurality of LSTM layers and an LSTM decoder having a structure symmetrical to the LSTM encoder.
The LSTM encoder may sequentially receive the representation vectors included in the first representation vector sequence and output a hidden vector having a predetermined magnitude, and the LSTM decoder may receive the hidden vector and output a second representation vector sequence corresponding to the first representation vector sequence.
The determining of whether the session is abnormal may include determining whether the session is abnormal on the basis of a difference between the first representation vector sequence and the second representation vector sequence.
The LSTM decoder may output the second representation vector sequence by outputting estimation vectors, each corresponding to one of the representation vectors included in the first representation vector sequence, in a reverse order to an order of the representation vectors included in the first representation vector sequence.
The LSTM neural network may sequentially receive the representation vectors included in the first representation vector sequence and output an estimation vector with respect to a representation vector immediately following the received representation vector.
The determining of whether the session is abnormal may include determining whether the session is abnormal on the basis of a difference between the estimation vector output by the LSTM neural network and the representation vector received by the LSTM neural network.
The method may further include training the convolutional neural network and the LSTM neural network.
The convolutional neural network may be trained by inputting training data to the convolutional neural network; inputting an output of the convolutional neural network to a symmetric neural network having a structure symmetrical to the convolutional neural network; and updating weight parameters used in the convolutional neural network on the basis of a difference between the output of the symmetric neural network and the training data.
The LSTM neural network may include an LSTM encoder including a plurality of LSTM layers and an LSTM decoder having a structure symmetrical to the LSTM encoder, and the LSTM neural network may be trained by inputting training data to the LSTM encoder; inputting a hidden vector output from the LSTM encoder and the training data to the LSTM decoder; and updating weight parameters used in the LSTM encoder and the LSTM decoder on the basis of a difference between an output of the LSTM decoder and the training data.
In other example embodiments, a method for detecting an abnormal session including a request message received by a server from a client and a response message generated by the server includes: transforming at least a part of messages included in the session into data in the form of a matrix; transforming the data in the form of the matrix into a representation vector, a dimension of which is lower than a dimension of the matrix of the data, using a convolutional neural network; and determining whether the session is abnormal by arranging the representation vectors obtained from the messages in an order in which the messages are generated to compose a first representation vector sequence, and analyzing the first representation vector sequence using a gated recurrent unit (GRU) neural network.
The GRU neural network may include a GRU encoder including a plurality of GRU layers and a GRU decoder having a structure symmetrical to the GRU encoder.
The GRU encoder may sequentially receive the representation vectors included in the first representation vector sequence and output a hidden vector having a predetermined magnitude, and the GRU decoder may receive the hidden vector and output a second representation vector sequence corresponding to the first representation vector sequence.
The determining of whether the session is abnormal may include determining whether the session is abnormal on the basis of a difference between the first representation vector sequence and the second representation vector sequence.
The GRU decoder may output the second representation vector sequence by outputting estimation vectors, each corresponding to one of the representation vectors included in the first representation vector sequence, in a reverse order to an order of the representation vectors included in the first representation vector sequence.
The GRU neural network may sequentially receive the representation vectors included in the first representation vector sequence and output an estimation vector with respect to a representation vector immediately following the received representation vector.
The determining of whether the session is abnormal may include determining whether the session is abnormal on the basis of a difference between a prediction value output by the GRU neural network and the representation vector received by the GRU neural network.
Example embodiments of the present invention will become more apparent by describing example embodiments of the present invention in detail with reference to the accompanying drawings, in which:
While the present invention is susceptible to various modifications and alternative embodiments, specific embodiments thereof are shown by way of example in the drawings and will be described. However, it should be understood that there is no intention to limit the present invention to the particular embodiments disclosed, but on the contrary, the present invention is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the present invention.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, the elements should not be limited by the terms. The terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of the present invention. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to another element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected” or “directly coupled” to another element, there are no intervening elements present.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present invention. As used herein, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, example embodiments of the present invention will be described in detail with reference to the accompanying drawings. For better understanding of the present invention, the same reference numerals are used to refer to the same elements throughout the description of the figures, and repeated description of the same elements will be omitted.
The apparatus 100 shown in
Referring to
The processor 110 may execute a program command stored in the memory 120 and/or the storage device 125. The processor 110 may refer to a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor by which the methods according to the present invention are performed. The memory 120 and the storage device 125 may include a volatile storage medium and/or a non-volatile storage medium. For example, the memory 120 may include a read-only memory (ROM) and/or a random-access memory (RAM).
The memory 120 may store at least one command that is executed by the processor 110.
The commands stored in the memory 120 may be updated through machine learning performed by the processor 110. The machine learning performed by the processor 110 may be implemented in a supervised learning method or an unsupervised learning method. However, the example embodiment is not limited thereto. For example, the machine learning may be implemented in other methods such as a reinforcement learning method and the like.
Referring to
Referring to
Referring again to
The processor 110 may transform each of the extracted messages into data in the form of a matrix. The processor 110 may transform a character included in each of the messages into a one-hot vector.
Referring to
The one-hot vector may include only one component having a value of one and the remaining components having a value of zero, or may include all components having a value of zero. In the one-hot vector, the position of the component having a value of one may vary with the type of the character represented by the one-hot vector. For example, as shown in
In the one-hot vector, the position of a component having a value of 1 may vary with the order of the character represented by the one-hot vector.
When a total number of the types of characters is F(0) (e.g., 69: twenty-six alphabetic characters, ten numerals from zero to nine, and thirty-three special characters including a new line character), the processor 110 may transform each message into a matrix having a magnitude of F(0)×L(0). When the length of the message is smaller than L(0), the positions corresponding to the missing characters may be filled with zero vectors. As another example, when the length of the message is larger than L(0), only the first L(0) characters may be transformed into one-hot vectors.
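As a concrete illustration of the transformation just described, the following Python sketch builds an F(0)×L(0) matrix of one-hot column vectors from a single message. The alphabet, the maximum length of 1024, and the function name are illustrative assumptions rather than values taken from the text.

```python
import numpy as np

# Illustrative 69-character alphabet; the actual character set and F(0) are design choices.
ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789-,;.!?:'\"/\\|_@#$%^&*~`+=<>()[]{}\n"
CHAR_INDEX = {ch: i for i, ch in enumerate(ALPHABET)}

def message_to_matrix(message: str, max_len: int = 1024) -> np.ndarray:
    """Transform a message into an F(0) x L(0) matrix of one-hot column vectors.

    Characters beyond max_len are dropped, and positions past the end of a short
    message are left as all-zero vectors, as described in the text.
    """
    matrix = np.zeros((len(ALPHABET), max_len), dtype=np.float32)
    for pos, ch in enumerate(message.lower()[:max_len]):
        idx = CHAR_INDEX.get(ch)
        if idx is not None:          # characters outside the alphabet stay all-zero
            matrix[idx, pos] = 1.0
    return matrix

# Example: encode (part of) an HTTP request line.
m = message_to_matrix("GET /index.html HTTP/1.1\n")
print(m.shape)                       # (69, 1024): F(0) x L(0) for this illustrative alphabet
```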
Referring again to
Referring to
The convolutional neural network may extract features of input data and generate and output data having a scale smaller than that of the input data. The convolutional neural network may receive data in the form of an image or a matrix.
The convolution and pooling layer may receive matrix data and perform the convolution operation on the received matrix data.
Referring to
The processor 110 may perform the convolution operation on the image OI while changing the position of the kernel FI on the image OI. The processor 110 may output a convolution image from the calculated convolution values.
Since the number of cases in which the filter kernel FI shown in
The convolution and pooling layer Layer 1 may perform a pooling operation on each of the feature maps output by the convolution operation, thereby reducing the size of the feature map. The pooling operation may be an operation of merging adjacent pixels in the feature map to obtain a single representative value. According to the pooling operation in the convolution and pooling layer, the size of the feature map may be reduced.
The representative value may be obtained in various methods. For example, the processor 110 may determine a maximum value among values of p×q adjacent pixels in the feature map to be the representative value. As another example, the processor 110 may determine the average value of values of p×q adjacent pixels in the feature map to be the representative value.
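The convolution and pooling operations described above can be sketched in a few lines of numpy. This is a minimal illustration assuming a single kernel, "valid" positioning, and max pooling; the kernel values and all sizes are arbitrary examples rather than parameters specified in the text.

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Slide the kernel over every valid position of the image and sum the
    element-wise products, producing one convolution value per position."""
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1), dtype=image.dtype)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

def max_pool(feature_map: np.ndarray, p: int = 2, q: int = 2) -> np.ndarray:
    """Merge each p x q block of adjacent values into a single representative
    value (here the maximum), reducing the size of the feature map."""
    H, W = feature_map.shape
    H2, W2 = H // p, W // q
    trimmed = feature_map[:H2 * p, :W2 * q]
    return trimmed.reshape(H2, p, W2, q).max(axis=(1, 3))

image = np.random.rand(69, 1024)      # one-hot message matrix from the previous step
kernel = np.random.randn(69, 7)       # kernel spanning the whole character dimension
fmap = conv2d_valid(image, kernel)    # shape (1, 1018)
pooled = max_pool(fmap, p=1, q=3)     # shape (1, 339)
```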
Referring again to the figure, the feature maps output from the last convolution and pooling layer Layer Nc may be expressed as ak(Nc)(x, y) for 0≤k≤F(Nc)−1.
The feature maps output from the last convolution and pooling layer Layer Nc may be input to the first fully connected layer Layer Nc+1. The first fully connected layer may transform the received feature maps into a one-dimensional representation vector a(Nc).
The first fully connected layer may multiply the transformed one-dimensional representation vector by a weight matrix. For example, the operation performed by the first fully connected layer may be represented by Equation 1.
In Equation 1, W(Nc+1) denotes the weight matrix used by the first fully connected layer.
Referring to Equation 1, the first fully connected layer may output a representation vector having a magnitude of A(Nc+1)×1.
Referring to
In Equation 2, a(l)(t) denotes an output representation vector of the lth fully connected layer, and w(l)(t, u) denotes the weight matrix used by the lth fully connected layer. ϕ(l) denotes an activation function used by the lth fully connected layer. a(l−1)(u) denotes the output representation vector of an (l−1)th fully connected layer, and may be an input representation vector for the lth fully connected layer.
An output layer may receive an output representation vector from the last fully connected layer and transform the received representation vector into an output representation vector z(N), as shown in Equation 3. The output layer may calculate final output values for the classes of the output representation vector z(N).
ŷ(t)=ϕ(N)(z(N)(t)) [Equation 4]
In Equation 4, ϕ(N) denotes an activation function used by the output layer.
As another example, the output layer may calculate the final output value using a softmax function. The process of calculating the final output representation vector in the output layer may be expressed by Equation 5.
Referring to Equation 5, the output layer may calculate the final output value using an exponential function for a class value of the output representation vector.
With the class index ranging over 0 to C−1 in Equations 3 to 5, the convolutional neural network may output a representation vector having a magnitude of C×1. That is, the convolutional neural network may receive matrix data having a magnitude of F(0)×L(0) and output a representation vector having a magnitude of C×1.
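To make the shape of these computations concrete, the following sketch implements a fully connected layer as a weight-matrix multiplication followed by an activation (in the spirit of Equations 1 and 2) and a softmax output (Equation 5). The layer sizes, random weights, and function names are illustrative assumptions.

```python
import numpy as np

def fully_connected(a_prev: np.ndarray, W: np.ndarray, activation=np.tanh) -> np.ndarray:
    """Multiply the incoming representation vector by a weight matrix and apply
    an activation function, as in Equations 1 and 2."""
    return activation(W @ a_prev)

def softmax(z: np.ndarray) -> np.ndarray:
    """Final output values computed with an exponential per class value (Equation 5)."""
    e = np.exp(z - z.max())               # subtract the maximum for numerical stability
    return e / e.sum()

# Illustrative dimensions: flattened feature maps -> 256 -> C = 64 classes.
a0 = np.random.rand(339)                  # flattened feature maps from the pooling stage
W1 = np.random.randn(256, 339) * 0.01
W2 = np.random.randn(64, 256) * 0.01

a1 = fully_connected(a0, W1)              # first fully connected layer
z2 = W2 @ a1                              # output layer value z(N)
representation = softmax(z2)              # representation vector having a magnitude of C x 1
print(representation.shape)               # (64,)
```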
The convolutional neural network may also be trained by an unsupervised learning method. The training method for the convolutional neural network will be described below with reference to
Referring again
x0, x1, . . . xS−1
xt may denote a representation vector generated from a tth message of the session (a request message or a response message).
In operation S160, the processor 110 may determine whether the session is abnormal by analyzing the first representation vector sequence. The processor 110 may analyze the first representation vector sequence using a long short-term memory (LSTM) neural network. The LSTM neural network may avoid the long-term dependency problem of a recurrent neural network (RNN) by selectively updating a cell state in which information is stored. Hereinafter, the LSTM neural network will be described.
Referring to
An nth layer may receive a hidden vector ht(n−1) from an (n−1)th layer. The nth layer may output a hidden vector ht(n) by using the hidden vector h(t−1)(n) with respect to a previous representation vector and the hidden vector ht(n−1) received from the (n−1)th layer.
Hereinafter, an operation of each of the layers of the LSTM neural network will be described. In the following description, the operations of the layers will be described with reference to the 0th layer. The nth layer may operate in a similar manner as the 0th layer except that it receives the hidden vector ht(n−1) instead of the representation vector xt.
Referring to
The forget gate 810 may calculate ft by using a tth representation vector xt, a previous cell state ct−1, and a hidden vector ht−1 with respect to a previous representation vector. The forget gate 810 may determine information which is to be discarded among the existing information and the extent to which the information is discarded during the calculation of ft. The forget gate 810 may calculate ft using Equation 6.
ft=σ(Wxfxt+Whfh(t−1)+Wcfc(t−1)+bf) [Equation 6]
In Equation 6, σ denotes a sigmoid function and bf denotes a bias. Wxf denotes a weight for xt, Whf denotes a weight for h(t−1), and Wcf denotes a weight for c(t−1).
The input gate 850 may determine new information which is to be reflected in the cell state. The input gate 850 may calculate new information to be reflected in the cell state using Equation 7.
it=σ(Wxixt+Whih(t−1)+Wcic(t−1)+bi) [Equation 7]
In Equation 7, σ denotes a sigmoid function and bi denotes a bias. Wxi denotes a weight for xt, Whi denotes a weight for h(t−1), and Wci denotes a weight for c(t−1).
The input gate 850 may calculate a candidate value c̃t for a new cell state ct. The input gate 850 may calculate the candidate value c̃t using Equation 8.
c̃t=tanh(Wxcxt+Whch(t−1)+bc) [Equation 8]
In Equation 8, bc denotes a bias. Wxc denotes a weight for xt and Whc denotes a weight for h(t−1).
The cell line may calculate the new cell state ct using ft, it, and c̃t.
For example, ct may be calculated by Equation 9.
ct=ft*c(t−1)+it*c̃t [Equation 9]
Referring to Equation 8, Equation 9 may be expressed as Equation 10.
ct=ft*c(t−1)+it*tanh(Wxcxt+Whch(t−1)+bc) [Equation 10]
The output gate 860 may calculate an output value using the cell state ct. For example, the output gate 860 may calculate the output value according to Equation 11.
ot=σ(Wxoxt+Whoh(t−1)+Wcoct+bo) [Equation 11]
In Equation 11, σ denotes a sigmoid function and bo denotes a bias. Wxo denotes a weight for xt, Who denotes a weight for h(t−1), and Wco denotes a weight for ct.
The LSTM layer may calculate the hidden vector ht for the representation vector xt using the output value ot and the new cell state ct. For example, ht may be calculated according to Equation 12.
ht=ot*tanh(ct) [Equation 12]
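A minimal numpy sketch of a single LSTM step following Equations 6 to 12, including the cell-state weights Wcf, Wci, and Wco that appear in the equations, is shown below. The vector sizes and the randomly initialized weights are illustrative assumptions, not values from the text.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step following Equations 6-12."""
    f_t = sigmoid(p["Wxf"] @ x_t + p["Whf"] @ h_prev + p["Wcf"] @ c_prev + p["bf"])   # Eq. 6
    i_t = sigmoid(p["Wxi"] @ x_t + p["Whi"] @ h_prev + p["Wci"] @ c_prev + p["bi"])   # Eq. 7
    c_tilde = np.tanh(p["Wxc"] @ x_t + p["Whc"] @ h_prev + p["bc"])                   # Eq. 8
    c_t = f_t * c_prev + i_t * c_tilde                                                # Eq. 9/10
    o_t = sigmoid(p["Wxo"] @ x_t + p["Who"] @ h_prev + p["Wco"] @ c_t + p["bo"])      # Eq. 11
    h_t = o_t * np.tanh(c_t)                                                          # Eq. 12
    return h_t, c_t

D, H = 64, 128                          # representation vector size and hidden size (illustrative)
rng = np.random.default_rng(0)
p = {name: rng.standard_normal((H, D if name.startswith("Wx") else H)) * 0.01
     for name in ["Wxf", "Whf", "Wcf", "Wxi", "Whi", "Wci", "Wxc", "Whc", "Wxo", "Who", "Wco"]}
p.update({b: np.zeros(H) for b in ["bf", "bi", "bc", "bo"]})

h, c = np.zeros(H), np.zeros(H)
for x_t in rng.standard_normal((10, D)):   # a sequence of ten representation vectors
    h, c = lstm_step(x_t, h, c, p)
```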
The LSTM neural network may include an LSTM encoder and an LSTM decoder having a structure symmetrical to the LSTM encoder. The LSTM encoder may receive the first representation vector sequence and output a hidden vector having a predetermined magnitude. The LSTM decoder may receive the hidden vector output from the LSTM encoder. The LSTM decoder may use the same weight matrices and bias values as those used in the LSTM encoder. The LSTM decoder may output a second representation vector sequence corresponding to the first representation vector sequence. The second representation vector sequence may include estimation vectors corresponding to the representation vectors included in the first representation vector sequence. The LSTM decoder may output the estimation vectors in a reverse order, that is, in the order reverse to the order of the representation vectors in the first representation vector sequence.
Referring to
Upon receiving the last representation vector x(S−1) of the first representation vector sequence, the LSTM encoder may output hidden vectors h(S−1)(0) to h(S−1)(N) of the respective LSTM layers.
The LSTM decoder may receive the hidden vectors h(S−1)(0) to h(S−1)(N) output from the LSTM encoder.
The LSTM decoder may output the second representation vector sequence x̂(S−1), x̂(S−2), . . . , x̂0 including estimation vectors with respect to the first representation vector sequence x0, x1, . . . , x(S−1). The LSTM decoder may output the estimation vectors in the reverse order (an order reverse to the order of the representation vectors in the first representation vector sequence).
The LSTM decoder may then output hidden vectors h(S−2)(0) to h(S−2)(N) in a similar manner.
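As a sketch of this encoder-decoder arrangement, the functions below reuse the lstm_step function from the earlier sketch: the encoder summarizes the first representation vector sequence into its final hidden and cell states, and the decoder is unrolled from those states to emit estimation vectors in reverse order. The readout matrix W_out that maps hidden vectors back to representation vectors is an assumption introduced for illustration; the text only states that the decoder reuses the encoder's weights and biases.

```python
import numpy as np

def lstm_encode(x_seq, lstm_step, params, hidden_size):
    """Sequentially feed the first representation vector sequence to the encoder
    and keep the hidden and cell states produced for the last vector x(S-1)."""
    h, c = np.zeros(hidden_size), np.zeros(hidden_size)
    for x_t in x_seq:
        h, c = lstm_step(x_t, h, c, params)
    return h, c

def lstm_decode(h, c, lstm_step, params, W_out, steps):
    """Unroll the decoder from the encoder's final state, emitting estimation
    vectors in reverse order (the estimate for the last message comes first).
    W_out is an assumed readout matrix from hidden vectors to representation vectors."""
    estimates = []
    x_hat = W_out @ h                          # estimation vector for the last message
    estimates.append(x_hat)
    for _ in range(steps - 1):
        h, c = lstm_step(x_hat, h, c, params)  # decoder step with the same weights
        x_hat = W_out @ h
        estimates.append(x_hat)
    return estimates                           # second representation vector sequence (reversed)
```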
When the LSTM decoder outputs the second representation vector sequence x̂(S−1), x̂(S−2), . . . , x̂0, the processor 110 may compare the second representation vector sequence with the first representation vector sequence. For example, the processor 110 may determine whether the session is abnormal using Equation 13.
In Equation 13, S denotes the number of messages (request messages or response messages) extracted from the session. xt is a representation vector output from a tth message, and x̂t is an estimation vector that is output by the LSTM decoder and corresponds to xt. The processor 110 may determine whether a difference between the first representation vector sequence and the second representation vector sequence is smaller than a predetermined reference value δ. When the difference between the first and second representation vector sequences is greater than the reference value δ, the processor 110 may determine that the session is abnormal.
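Since Equation 13 itself is not reproduced here, the decision rule can only be sketched under an assumption about the comparison measure; the snippet below uses a mean squared difference between the two sequences and an illustrative threshold δ.

```python
import numpy as np

def session_is_abnormal(first_seq, second_seq, delta):
    """Flag the session as abnormal when the difference between the first
    representation vector sequence and the second (reconstructed) sequence
    exceeds the reference value delta. A mean squared difference over the S
    messages is assumed here; the exact form of Equation 13 may differ."""
    first = np.asarray(first_seq)                       # shape (S, C)
    second = np.asarray(second_seq)                     # shape (S, C)
    diff = np.mean(np.sum((first - second) ** 2, axis=1))
    return diff > delta

# x_seq: representation vectors of the session; x_hat_seq: output of the LSTM decoder
x_seq = np.random.rand(12, 64)
x_hat_seq = x_seq + 0.01 * np.random.randn(12, 64)      # small reconstruction error
print(session_is_abnormal(x_seq, x_hat_seq, delta=0.5)) # False for this normal-looking session
```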
In the above description, an example has been described in which the LSTM neural network includes an LSTM encoder and an LSTM decoder. However, the example embodiment is not limited thereto. For example, the LSTM neural network may directly output an estimated vector.
Referring to
For example, the LSTM neural network may receive x0 and output an estimation vector x̂1 with respect to x1. Similarly, the LSTM neural network may receive xt−1 and output x̂t. The processor 110 may determine whether the session is abnormal based on the difference between the estimation vectors x̂1, x̂2, . . . , x̂(S−1) output by the LSTM neural network and the representation vectors x1, x2, . . . , x(S−1) received by the LSTM neural network. For example, the processor 110 may determine whether the session is abnormal using Equation 14.
The processor 110 may determine whether the difference between the representation vectors x1, x2, . . . , x(S−1) and the estimation vectors x̂1, x̂2, . . . , x̂(S−1) is smaller than a predetermined reference value δ. When the difference is greater than the reference value δ, the processor 110 may determine that the session is abnormal.
In the above description, an example in which the processor 110 determines whether the session is abnormal using the LSTM neural network has been described. However, the example embodiment is not limited thereto. For example, in operation S160, the processor 110 may determine whether the session is abnormal using a gated recurrent unit (GRU) neural network.
Referring to
An nth layer may receive st(n−1) from an (n−1)th layer. As another example, the nth layer may receive st(n−1) and xt from the (n−1)th layer. The nth layer may output a hidden vector st(n) by using a hidden vector s(t−1)(n) with respect to a previous representation vector and the hidden vector st(n−1) received from the (n−1)th layer.
Hereinafter, an operation of each of the layers of the GRU neural network will be described. In the following description, an operation of the layer will be described with reference to the 0th layer. The nth layer operates in a similar manner as the 0th layer except that it receives the hidden vector st(n−1), or both the hidden vector st(n−1) and the representation vector xt, instead of the representation vector xt alone.
Referring to
For example, the reset gate r may calculate a reset parameter r using Equation 15.
r=σ(xtUr+st−1Wr) [Equation 15]
In Equation 15, σ denotes a sigmoid function. Ur denotes a weight for xt, and Wr denotes a weight for st−1.
For example, the update gate z may calculate an update parameter z using Equation 16.
z=σ(xtUz+st−1Wz) [Equation 16]
In Equation 16, σ denotes a sigmoid function. Uz denotes a weight for xt, and Wz denotes a weight for st−1.
The GRU layer may calculate an estimated value h for a new hidden vector according to Equation 17.
h=tanh(xtUh+(st−1 ∘ r)Wh) [Equation 17]
In Equation 17, Uh denotes a weight for xt, and Wh denotes a weight for st−1 ∘ r.
The GRU layer may calculate a hidden vector st for xt by using h calculated in Equation 17. For example, the GRU layer may calculate the hidden vector st for xt by using Equation 18.
st=(1−z)∘h+z∘st−1 [Equation 18]
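The corresponding GRU step can be sketched as follows, mirroring Equations 15 to 18. The sizes and the randomly initialized weights Ur, Wr, Uz, Wz, Uh, and Wh are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, s_prev, p):
    """One GRU step following Equations 15-18."""
    r = sigmoid(x_t @ p["Ur"] + s_prev @ p["Wr"])          # reset parameter, Eq. 15
    z = sigmoid(x_t @ p["Uz"] + s_prev @ p["Wz"])          # update parameter, Eq. 16
    h = np.tanh(x_t @ p["Uh"] + (s_prev * r) @ p["Wh"])    # estimated hidden vector, Eq. 17
    return (1.0 - z) * h + z * s_prev                      # new hidden vector s_t, Eq. 18

D, H = 64, 128                                             # illustrative sizes
rng = np.random.default_rng(0)
p = {"Ur": rng.standard_normal((D, H)) * 0.01, "Wr": rng.standard_normal((H, H)) * 0.01,
     "Uz": rng.standard_normal((D, H)) * 0.01, "Wz": rng.standard_normal((H, H)) * 0.01,
     "Uh": rng.standard_normal((D, H)) * 0.01, "Wh": rng.standard_normal((H, H)) * 0.01}

s = np.zeros(H)
for x_t in rng.standard_normal((10, D)):                   # a sequence of representation vectors
    s = gru_step(x_t, s, p)
```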
The GRU neural network may operate in a similar manner as that in the operation of the LSTM neural network, except for the configuration of each layer. For example, the example embodiments of the LSTM neural network shown in
For example, the GRU neural network may include a GRU encoder and a GRU decoder similar to that shown in
The GRU decoder may output a second representation vector sequence x̂(S−1), x̂(S−2), . . . , x̂0 including estimation vectors with respect to x0, x1, . . . , x(S−1). The GRU decoder may use the same weight matrices and bias values as those used in the GRU encoder. The GRU decoder may output the estimation vectors in the reverse order (an order reverse to the order of the representation vectors in the first representation vector sequence).
The processor 110 may compare the first representation vector sequence with the second representation vector sequence using Equation 13, thereby determining whether the session is abnormal.
As another example, the GRU neural network may not be divided into an encoder and a decoder. For example, the GRU neural network may directly output estimated vectors as described with reference to
The GRU neural network may receive x0 and output an estimation vector x̂1 for x1. Similarly, the GRU neural network may receive xt−1 and output x̂t.
In the following description of the example embodiment of
Referring to
For example, the processor 110 may train the convolutional neural network in an unsupervised learning method. As another example, when training data including messages and output representation vectors labeled on the messages exists, the processor 110 may train the convolutional neural network in a supervised learning method.
In the case of unsupervised learning, the processor 110 may connect a symmetric neural network having a structure symmetrical to the convolutional neural network to the convolutional neural network. The processor 110 may input the output of the convolutional neural network to the symmetric neural network.
Referring to
The processor 110 may update weight parameters of the convolutional neural network on the basis of the difference between an output of the symmetric neural network and an input to the convolutional neural network. For example, the processor 110 may determine a cost function on the basis of at least one of a reconstruction error and a mean squared error between the output of the symmetric neural network and the input to the convolutional neural network. The processor 110 may update the weight parameters in a direction in which the cost function determined by the above-described method is minimized.
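This training procedure might be sketched as follows. The sketch uses PyTorch for brevity; the layer configuration, the sizes (with L(0) chosen so that the symmetric network reproduces the input length exactly), the optimizer, and the use of a mean squared reconstruction error are all illustrative assumptions rather than the specific architecture of the text.

```python
import torch
import torch.nn as nn

F0, L0, C = 69, 1023, 64                      # alphabet size, message length, representation size

encoder = nn.Sequential(                      # stands in for the convolutional neural network
    nn.Conv1d(F0, 128, kernel_size=7), nn.ReLU(), nn.MaxPool1d(3),
    nn.Flatten(), nn.Linear(128 * 339, C),
)
decoder = nn.Sequential(                      # symmetric network used only during training
    nn.Linear(C, 128 * 339), nn.Unflatten(1, (128, 339)),
    nn.Upsample(scale_factor=3), nn.ConvTranspose1d(128, F0, kernel_size=7),
)

optimizer = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()                        # reconstruction error as the cost function

for batch in [torch.rand(8, F0, L0) for _ in range(3)]:   # toy batches of message matrices
    recon = decoder(encoder(batch))
    loss = loss_fn(recon, batch)              # difference between the symmetric network's output
    optimizer.zero_grad()                     # and the input to the convolutional network
    loss.backward()
    optimizer.step()
```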
For example, the processor 110 may train the LSTM (GRU) neural network in an unsupervised learning method.
When the LSTM (GRU) neural network includes an LSTM (GRU) encoder and an LSTM (GRU) decoder, the processor 110 may calculate the cost function by comparing representation vectors input to the LSTM (GRU) encoder with representation vectors output from the LSTM (GRU) decoder. For example, the processor 110 may calculate the cost function using Equation 19.
In Equation 19, J(θ) denotes a cost function value, Card(T) denotes the number of sessions included in the training data, Sn denotes the number of messages included in an nth training session, xt(n) denotes a representation vector corresponding to a tth message of the nth training session, and x̂t(n) denotes an estimation vector for xt(n) output from the LSTM (GRU) decoder. In addition, θ denotes a set of weight parameters of the LSTM (GRU) neural network. For example, in the case of an LSTM neural network, θ may include the weight matrices and biases used in Equations 6 to 12, such as Wxf, Wxi, Wxc, and Wxo.
The processor 110 may update the weight parameters included in θ in a direction in which the cost function J(θ) of Equation 19 is minimized.
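A training loop along these lines might look like the following PyTorch sketch. The shared LSTM for the encoder and decoder follows the statement that the decoder reuses the encoder's weights; the linear readout back to representation vectors, the layer count, the optimizer, and the exact squared-error cost are assumptions made for illustration.

```python
import torch
import torch.nn as nn

C, H = 64, 128                               # representation and hidden sizes (illustrative)

class SessionAutoencoder(nn.Module):
    """Minimal LSTM encoder-decoder trained to reproduce the input sequence
    (in reverse order), with a J(theta)-style squared-error cost."""
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(C, H, num_layers=2, batch_first=True)   # shared by encoder and decoder
        self.readout = nn.Linear(H, C)       # assumed readout back to representation vectors

    def forward(self, x):                    # x: (batch, S, C) representation vector sequences
        _, (h, c) = self.lstm(x)             # encoder pass over the training data
        target = torch.flip(x, dims=[1])     # estimation vectors are produced in reverse order
        start = torch.zeros_like(target[:, :1])
        dec_in = torch.cat([start, target[:, :-1]], dim=1)   # training data also fed to the decoder
        out, _ = self.lstm(dec_in, (h, c))   # decoder pass reusing the same weights
        return self.readout(out), target

model = SessionAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for x in [torch.rand(4, 12, C) for _ in range(3)]:    # toy batches of 12-message sessions
    x_hat, target = model(x)
    loss = ((x_hat - target) ** 2).sum(dim=2).mean()  # squared differences averaged over sessions and messages
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```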
The methods for detecting an abnormal session according to the example embodiments of the present invention have been described above with reference to
As is apparent from the above, messages included in a session are transformed into low-dimensional representation vectors using a convolutional neural network. In addition, a representation vector sequence composed from the session is analyzed and an abnormality of the session is determined using an LSTM or GRU neural network. According to example embodiments, whether a session is abnormal is easily determined using an artificial neural network without manual intervention.
The methods according to the present invention may be implemented in the form of program commands executable by various computer devices and may be recorded in computer-readable media. The computer-readable media may include, alone or in combination, program commands, data files, data structures, and the like. The media and program commands may be those specially designed and constructed for the purposes of the present invention, or may be of the kind well known and available to those having skill in the computer software arts.
Examples of the computer readable storage medium include a hardware device constructed to store and execute a program command, for example, a read-only memory (ROM), a random-access memory (RAM), and a flash memory. The program command may include a high-level language code executable by a computer through an interpreter in addition to a machine language code made by a compiler. The described hardware devices may be configured to act as one or more software modules in order to perform the operations of the present invention, or vice versa.
While the example embodiments of the present invention and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations may be made herein without departing from the scope of the present invention.
Number | Date | Country | Kind |
---|---|---|---|
10-2017-0122363 | Sep 2017 | KR | national |