The following description relates to a method and apparatus with a neural network and training.
A neural network may have an operation structure in which a large number of processing elements with simple functions are connected in parallel, and may be used to solve issues that are difficult to solve using existing methods. To classify input patterns into predetermined groups, the neural network may implement learning or training. Based on training results, the neural network may have a generalization ability to generate relatively correct outputs for input patterns that have not yet been used for training.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one general aspect, a processor-implemented neural network method includes: determining an adaptive parameter and an adaptive mask of a current task to be learned among a plurality of tasks of a neural network; determining a model parameter of the current task based on the adaptive parameter, the adaptive mask, and a shared parameter of the plurality of tasks; and training the model parameter and an adaptive parameter of a previous task with respect to the current task, wherein the adaptive parameter of the previous task and the shared parameter are trained with respect to the previous task.
The training may include training the adaptive parameter of the previous task such that a change in a model parameter of the previous task is minimized as the shared parameter is trained with respect to the current task.
The training may include training the model parameter based on training data of the current task.
The determining of the model parameter may include determining the model parameter of the current task by applying the adaptive mask of the current task to the shared parameter and then adding the adaptive parameter to a result of the applying.
The applying may include a vector-wise multiplication between the shared parameter and the adaptive mask of the current task.
The determining of the adaptive parameter and the adaptive mask may include determining the adaptive parameter based on the shared parameter trained with respect to the previous task, and determining the adaptive mask at random.
The determining of the adaptive parameter and the adaptive mask, the determining of the model parameter, and the training may be iteratively performed with respect to each of the plurality of tasks.
The method may include: grouping a plurality of adaptive parameters of the plurality of tasks into a plurality of groups; and decomposing each of the adaptive parameters into a locally shared parameter shared by adaptive parameters grouped into a same group and a second adaptive parameter sparser than the respective adaptive parameter, based on whether elements included in each of the adaptive parameters grouped into the same group satisfy a predetermined condition.
The model parameter of the current task may be determined based on the shared parameter, the locally shared parameter of the group to which the current task belongs, and a second adaptive parameter and the adaptive mask of the current task.
The predetermined condition may be corresponding elements included in each of the adaptive parameters grouped into the same group having a value difference less than or equal to a threshold.
The grouping may include grouping the plurality of adaptive parameters based on K-means clustering, such that adaptive parameters of the plurality of adaptive parameters corresponding to similar tasks are grouped into a same group among the plurality of groups.
A structure of the neural network may be maintained unchanged, and a connection weight between nodes included in the neural network may be determined based on the model parameter.
The method may include obtaining output data based on the trained model parameter and input data to be inferred.
A non-transitory computer-readable storage medium may store instructions that, when executed by a processor, configure the processor to perform the method.
In another general aspect, a processor-implemented neural network method includes: selecting an adaptive parameter and an adaptive mask of a target task to be performed among a plurality of tasks of a neural network; determining a model of the target task based on the adaptive parameter, the adaptive mask, and a shared parameter of the plurality of tasks; and obtaining output data from the model by inputting input data to be inferred into the determined model.
The determining of the model may include determining the model parameter of the target task by applying the adaptive mask of the target task to the shared parameter and adding the adaptive parameter to a result of the applying, and determining a connection weight between nodes included in the neural network based on the model parameter.
The adaptive parameter may be among adaptive parameters of the plurality of tasks grouped into a plurality of groups, and the adaptive parameter may be determined based on a locally shared parameter of a group to which the target task belongs and a second adaptive parameter corresponding to the target task and being sparser than the adaptive parameter.
An adaptive parameter of a task to be removed from among the plurality of tasks may be deleted.
The plurality of tasks may have a same data type to be input into the neural network.
In another general aspect, a neural network apparatus includes: one or more processors configured to: determine an adaptive parameter and an adaptive mask of a current task to be learned among a plurality of tasks of a neural network, determine a model parameter of the current task based on the adaptive parameter, the adaptive mask, and a shared parameter of the plurality of tasks, and train the model parameter and an adaptive parameter of a previous task with respect to the current task, wherein the adaptive parameter of the previous task and the shared parameter are trained with respect to the previous task.
For the training, the one or more processors may be configured to train the adaptive parameter of the previous task such that a change in a model parameter of the previous task is minimized as the shared parameter is trained with respect to the current task.
For the training, the one or more processors may be configured to train the model parameter based on training data of the current task.
For the determining of the model parameter, the one or more processors may be configured to determine the model parameter of the current task by applying the adaptive mask of the current task to the shared parameter and then adding the adaptive parameter thereto.
In another general aspect, a neural network apparatus includes: one or more processors configured to: select an adaptive parameter and an adaptive mask of a target task to be performed among a plurality of tasks of a neural network, determine a model of the target task based on the adaptive parameter, the adaptive mask, and a shared parameter of the plurality of tasks, and obtain output data from the model by inputting input data to be inferred into the determined model.
In another general aspect, a processor-implemented neural network method includes: determining a model parameter of a current task, among a plurality of tasks of a neural network, based on an adaptive parameter and an adaptive mask of the current task, and a previously-trained shared parameter of the plurality of tasks; training, based on training data of the current task, the model parameter of the current task and a previously-trained adaptive parameter of a previous task with respect to the current task; and redetermining a previously-determined model parameter of the previous task based on the trained adaptive parameter of the previous task.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known in the art, after an understanding of the disclosure of this application, may be omitted for increased clarity and conciseness.
Although terms of “first” or “second” are used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Throughout the present disclosure, when an element, such as a layer, region, or substrate, is described as being “on,” “connected to,” or “coupled to” another element, it may be directly “on,” “connected to,” or “coupled to” the other element, or there may be one or more other elements intervening therebetween. In contrast, when an element is described as being “directly on,” “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween. Likewise, expressions, for example, “between” and “immediately between” and “adjacent to” and “immediately adjacent to” may also be construed as described in the foregoing.
The terminology used herein is for the purpose of describing particular examples only and is not to be limiting of the disclosure. As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. As used herein, the terms “include,” “comprise,” and “have” specify the presence of stated features, integers, steps, operations, elements, components, numbers, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, numbers, and/or combinations thereof. The use of the term “may” herein with respect to an example or embodiment (for example, as to what an example or embodiment may include or implement) means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains consistent with and after an understanding of the present disclosure. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Hereinafter, examples will be described in detail with reference to the accompanying drawings. The following specific structural or functional descriptions are provided merely to describe the examples, and the scope of the examples is not limited to the descriptions provided in the present disclosure. Various changes and modifications can be made thereto by those of ordinary skill in the art based on an understanding of the disclosure of the present application. Like reference numerals in the drawings denote like elements, and a known function or configuration will be omitted herein.
Referring to the illustrated example, a neural network 110 may include an input layer 111, a hidden layer 113, and an output layer 115.
The input layer 111 may include one or more nodes into which input data are directly input, not through a link in a relationship with other nodes of a previous layer. The output layer 115 may include one or more nodes that do not have an output link connected to a node of a subsequent layer. The hidden layer 113 may include the remaining layer(s) of the neural network 110, other than the input layer 111 and the output layer 115.
The neural network used in the example may be provided in various structures. The number of hidden layers included in the neural network 110, the number of nodes included in each layer, and/or the connection between nodes may vary depending on an example.
An output of a node included in a layer may be input into one or more nodes of another layer. For example, an output of a node included in the input layer 111 may be transferred to the nodes of the hidden layer 113. The nodes may be connected to each other by “links”, and nodes connected through a link may form a relative relationship of an input node and an output node. The concept of an input node and an output node is relative, and a predetermined node which is an output node in the relationship with a node may be an input node in the relationship with another node, and vice versa.
A connection weight may be set for a link between nodes. For example, a predetermined connection weight may be set for a link between nodes, and the connection weight may be adjusted or changed. Neural networks having different connection weights may have different characteristics. The connection weight may amplify, reduce, or maintain a relevant data value, thereby determining a degree of influence of the data value on a final result. The connection weight may correspond to a model parameter of the neural network 110.
In a relationship of an input node and an output node connected through a link, an output value of the output node may be determined based on data input into the input node and a connection weight of the link between the input node and the output node. For example, when one or more input nodes are connected to a single output node by respective links, an output value of the output node may be determined based on input values input into the one or more input nodes and connection weights of the links between the one or more input nodes and the output node.
Each node included in the hidden layer 113 may receive an output of an activation function related to weighted inputs of the nodes included in a previous layer. The weighted inputs may be obtained by multiplying inputs of the nodes included in the previous layer by connection weights. The activation function may correspond to, for example, a sigmoid, a hyperbolic tangent (tanh), or a rectified linear unit (ReLU). The weighted inputs of the nodes included in the previous layer may be input into each node included in the output layer 115. A process of inputting weighted data from a predetermined layer to the next layer may be referred to as propagation.
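For illustration only, the propagation described above may be sketched as follows; the layer sizes, the random connection weights, and the choice of ReLU are hypothetical assumptions rather than part of the examples above.

```python
import numpy as np

def relu(x):
    # Rectified linear unit activation, one of the examples mentioned above.
    return np.maximum(0.0, x)

def layer_forward(inputs, weights, activation=relu):
    # Weighted inputs: each input value is multiplied by the connection
    # weight of its link and summed per output node.
    weighted = weights @ inputs
    # The activation function is applied to the weighted inputs.
    return activation(weighted)

# Hypothetical sizes: 4 input nodes, 3 hidden nodes.
x = np.array([0.5, -1.0, 2.0, 0.1])
W = np.random.randn(3, 4)          # connection weights (model parameters)
hidden_out = layer_forward(x, W)   # propagated to the next layer
```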
The neural network 110 as described above may be implemented by a hardware device such as a computer system executing instructions. The neural network 110 may include, for example, a fully connected network, a deep convolutional network, and/or a recurrent neural network. The neural network 110 may be used in various fields such as object recognition, speech recognition, machine translation, pattern recognition, and/or computer vision.
The neural network 110 may use continual learning techniques to process various tasks. For example, among the continual learning techniques, expandable continual learning techniques may include a progressive neural network (PGN), reinforced continual learning (RCL), a dynamically expandable network (DEN), and the like. In general, continual learning may be an online multi-task learning method, and may be a technique for obtaining a single model capable of finally performing various tasks in an environment where new data and new tasks are sequentially given. A typical continual learning technique may perform inference for many tasks using a single model but may have an issue of catastrophic forgetting, the tendency of a model to forget knowledge learned for earlier tasks as it learns on new tasks. Further, as the number of tasks learned by the typical continual learning technique increases, the memory and/or processing power cost for effective training increases rapidly.
According to one or more embodiments, the occurrence of catastrophic forgetting described above may be effectively prevented by decomposing a model parameter (e.g., a connection weight) into a shared parameter σ 120 and an adaptive parameter τ1:t 140 at each layer included in the neural network 110 and retroactively training an adaptive parameter of a previous task as a new task is trained. The shared parameter σ 120 may be a parameter shared by a plurality of tasks T1 to T5 and may include generic knowledge about the plurality of tasks.
The adaptive parameter τ1:t 140 may be knowledge about each task that is not expressed by the shared parameter σ 120. Through maximized utilization of the shared parameter σ 120 during training, the adaptive parameter τ1:t 140 may be determined to be sparse, which may effectively suppress a radical increase in the size of the neural network 110 caused by an increase in the number of tasks. An adaptive mask M1:t 130 may correspond to an attention for accessing only related knowledge in the shared parameter for processing a corresponding task.
Hereinafter, non-limiting examples of the continual learning of one or more embodiments will be described in further detail.
Referring to the illustrated example, operations 210 through 260 described below may be performed by a training apparatus.
In continual learning, a plurality of tasks {T1, . . . , TT} may be used in a random order for training a neural network. A dataset of a t-th task may be denoted as Dt={xti, yti}i=1Nt, where xti denotes an i-th input instance of the t-th task, yti denotes a corresponding label, and Nt denotes the number of training pairs of the t-th task.
To minimize the catastrophic forgetting described above and the increase in the size of the neural network caused by the increase in the number of tasks to be learned, a training apparatus of one or more embodiments may decompose a model parameter θ of the neural network into a task-shared parameter σ and a task-adaptive parameter matrix τ. That is, a model parameter for the t-th task may be expressed by θt=σ⊗Mt+τt. In this example, ⊗ denotes a vector-wise multiplication, and Mt (e.g., a task-adaptive mask) may act as an attention for focusing only on the parts relevant for the corresponding task in the task-shared parameter σ. In summary, parameters used in continual learning may include a task-shared parameter σ∈ℝN×M, a task-adaptive parameter τt∈ℝN×M, and a task-adaptive mask Mt∈ℝM.
This parameter decomposition may allow easy control of the trade-off between semantic drift and predictive performance of a new task by imposing separate regularizations on the decomposed parameters. For example, when training for a new task is initiated, the shared parameter σ determined for the previous task may be properly updated and induced not to deviate far from the previous shared parameter σ(t−1). At the same time, the capacity of the adaptive parameter τt may be induced to be as small as possible, by making the adaptive parameter τt sparse.
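As a non-limiting sketch of the decomposition θt=σ⊗Mt+τt described above, the following example computes a model parameter from a shared parameter, an adaptive mask, and an adaptive parameter; the layer dimensions, the channel-wise broadcasting of the mask, and the use of a sigmoid over a trainable vector vt are assumptions made for illustration.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

N, M = 8, 16                      # hypothetical layer dimensions
sigma = np.random.randn(N, M)     # task-shared parameter, shared by all tasks
tau_t = np.zeros((N, M))          # task-adaptive parameter (kept sparse)
v_t = np.random.randn(M)          # trainable vector behind the adaptive mask

# The adaptive mask Mt acts as an attention over output channels/nodes of sigma.
mask_t = sigmoid(v_t)

# theta_t = sigma (x) Mt + tau_t: mask the shared knowledge, then add
# the task-specific residual knowledge.
theta_t = sigma * mask_t[np.newaxis, :] + tau_t
```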
In operation 210, a training apparatus may determine whether a current task to be learned corresponds to a new task. When the current task is a new task that has not been learned previously, operation 220 may be performed. Conversely, when the current task is a task being learned, operation 230 may be performed.
In operation 220, the training apparatus may determine an adaptive parameter τt and an adaptive mask Mt for the current task. For example, the adaptive parameter τt may be determined to be the same as the shared parameter σ trained for a previous task. Further, the adaptive mask Mt may be determined at random.
In operation 230, the training apparatus may determine a model parameter θt for the current task. For the current task t, the model parameter may be determined by θt=σ⊗Mt+τt. In this way, the training apparatus may determine the model parameter θt by applying the adaptive mask Mt to the shared parameter σ and then adding the adaptive parameter τt thereto.
In operation 240, the training apparatus may train the model parameter θt and an adaptive parameter τ1:t−1 of the previous task with respect to the current task. Training may be performed based on an objective function expressed by Equation 1 below, for example. Through training performed based on a single objective function, a fast training speed may be achieved.
In Equation 1, ℒ denotes a loss function applied to a neural network, and ∥⋅∥1 denotes an element-wise L1 norm defined on a matrix. λ1 and λ2 denote hyperparameters that balance the sparsity of the adaptive parameter and the prevention of catastrophic forgetting, respectively. For example, the training apparatus may use L2 transfer regularization to prevent catastrophic forgetting but may also use other types of regularizations such as elastic weight consolidation (EWC). For example, the adaptive mask Mt may correspond to a sigmoid function of a trainable parameter vt applied to output channels or nodes of the shared parameter σ in each layer. As described above, a model that decomposes a model parameter into a shared parameter and an adaptive parameter may be referred to as an additive parameter decomposition (APD) model.
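Because Equation 1 is described below only term by term, one plausible reconstruction consistent with that description is shown here; the exact norms, weighting, and set of optimization variables are assumptions rather than a definitive statement of Equation 1.

```latex
\min_{\sigma,\,\tau_t,\,v_t,\,\{\tau_i\}_{i<t}}\;
  \mathcal{L}\bigl(\{\sigma \otimes M_t + \tau_t\};\, \mathcal{D}_t\bigr)
  + \lambda_1 \lVert \tau_t \rVert_1
  + \lambda_2 \sum_{i=1}^{t-1}
      \bigl\lVert \theta_i^{*} - (\sigma \otimes M_i + \tau_i) \bigr\rVert_2^2
\tag{1}
```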
In Equation 1, the first term ℒ({σ⊗Mt+τt}; Dt) may reflect configuring a model for the current task t and training the model with a training dataset Dt. For example, the model may be trained to minimize a loss between labels of the training data and inference results obtained when input instances included in the training data for the current task are input into the model.
In Equation 1, the second term λ1∥τt∥1 is a penalty term that makes the adaptive parameter τt sparse, thereby pruning the adaptive parameter τt. Through this, even when the number of tasks to be learned increases, an increase in the parameter size may be effectively suppressed.
In Equation 1, the third term λ2Σi<t∥θ*i−(σ⊗Mi+τi)∥22 may be for maintaining the original solutions learned for previous tasks even when the shared parameter for the current task is trained and updated. A model parameter for the previous task (for example, a (t−1)-th task) is expressed by θt−1=σ⊗Mt−1+τt−1, wherein the task-shared parameter σ may be properly updated when learning the current task (for example, a t-th task) is initiated. As a result, the model parameter θt−1 of the previous task is not maintained to be constant but is changed. Thus, by reflecting the task-shared parameter σ updated through training in the adaptive parameter τt−1 of the previous task in operation 240, the model parameter θt−1 of the previous task may be maintained to be constant. In Equation 1, the third term may be such a penalty term.
θ*i denotes a model parameter trained and determined for an i-th task. Here, i is less than t and denotes an i-th previous task. When a new t-th task is learned, the model parameters θ*i of previous tasks may all be recovered through Equation 2 below, for example. θ*i may be fixed without being updated during training.
(θi for each previous task i<t): θ*i=σ(t−1)⊗Mi(t−1)+τi(t−1)  Equation 2
Further, σ⊗Mi+τi may be updated such that it is constrained to remain as close as possible to θ*i (see the last term of Equation 1).
As such, retroactive learning of adaptive parameters τ1:t−1 of previous tasks may be performed at the parameter level without generating a separate model and without a training dataset. Through this, the training apparatus of one or more embodiments may effectively prevent parameter-level drift and catastrophic forgetting, and may generate a trained model with a high degree of order-robustness in task learning.
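The following non-authoritative sketch illustrates this parameter-level retroactive constraint: the previous model parameters θ*i are recovered once (Equation 2) and then used as fixed targets in a drift penalty, without any training data of the previous tasks. The variable names, sizes, and NumPy-based formulation are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, t = 8, 16, 3                 # hypothetical sizes and number of previous tasks

def model_param(sigma, mask, tau):
    # theta = sigma (x) M + tau, as in the decomposition described above.
    return sigma * mask[np.newaxis, :] + tau

sigma = rng.normal(size=(N, M))    # shared parameter after learning task t-1
masks = [rng.uniform(size=M) for _ in range(t)]
taus = [rng.normal(size=(N, M)) * 0.01 for _ in range(t)]  # sparse-ish adaptive params

# Recover and freeze the previous model parameters (Equation 2)
# before the shared parameter is updated for the current task.
theta_star = [model_param(sigma, masks[i], taus[i]) for i in range(t)]

def drift_penalty(sigma, masks, taus, theta_star, lam2=0.1):
    # Last term of Equation 1: constrain sigma (x) Mi + tau_i to stay close
    # to theta*_i by retroactively updating tau_i while sigma changes.
    total = 0.0
    for mask, tau, target in zip(masks, taus, theta_star):
        diff = model_param(sigma, mask, tau) - target
        total += np.sum(diff ** 2)
    return lam2 * total

penalty = drift_penalty(sigma, masks, taus, theta_star)
```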
In operation 250, the training apparatus may determine whether a predetermined number of (for example, s) new tasks have been learned. This is for hierarchical knowledge consolidation, which is described later. When fewer than the predetermined number of (for example, s) new tasks have been learned, operation 210 may be performed again. Conversely, when the predetermined number of (for example, s) new tasks have been learned, operation 260 may be performed.
In operation 260, the training apparatus may perform hierarchical knowledge consolidation on the adaptive parameters, thereby decomposing each adaptive parameter into a locally shared parameter {tilde over (σ)}g and a sparser second adaptive parameter for the corresponding task. Examples of hierarchical knowledge consolidation will be described in further detail below.
Referring to the illustrated example, an adaptive parameter may be maintained separately for each of the plurality of tasks.
In continual learning, information on a predetermined task may be selectively removed due to the structural characteristic that a separate adaptive parameter exists for each task. For example, when a corresponding task is no longer needed during training or hinders learning other major tasks, an adaptive parameter of the corresponding task may be deleted without affecting the performance for the remaining tasks, whereby information on the corresponding task may be easily removed. Through this, the training apparatus of one or more embodiments may achieve efficient training and storage space management. For example, when a predetermined product is discontinued, a task of recognizing and classifying the product may no longer be necessary. Thus, by deleting an adaptive parameter which is training information for the task, the training apparatus of one or more embodiments may efficiently manage the model and maintain the performance for other tasks. This provides advantages in lifelong learning scenarios.
Referring to the illustrated example, a plurality of tasks may be grouped based on similarity between the tasks.
The plurality of tasks may be related to similar targets to be recognized. For example, a first task T1 of recognizing a sedan and a third task T3 of recognizing a truck are partially similar in that the targets to be recognized are vehicles. Further, a second task T2 of recognizing a guitar and a fifth task T5 of recognizing a violin are partially similar in that the targets to be recognized are musical instruments. As such, similar tasks may have redundancy of information in their adaptive parameters due to their characteristics. Setting the redundant information as a locally shared parameter {tilde over (σ)}g may make the adaptive parameters τ1:t sparser. It may be verified that, compared to a case where there is no locally shared parameter, the adaptive parameters become sparser when the locally shared parameter is used.
Referring to the illustrated example, operations 510 through 530 described below may be performed by a training apparatus.
In operation 510, a training apparatus may generate a plurality of centroids based on a plurality of adaptive parameters for a plurality of tasks. In operation 520, the training apparatus may group the plurality of adaptive parameters into a plurality of groups. In this case, K-means clustering may be used to group the adaptive parameters.
In operation 530, the training apparatus may decompose each of the adaptive parameters grouped into the same group into a locally shared parameter {tilde over (σ)}g and a second adaptive parameter {τi}i∈g for a corresponding task.
In summary, each time s new tasks are learned, K-means clustering may be performed on the previously trained adaptive parameters {τi}i=1t to group the tasks into K groups. In addition, each of the previously trained adaptive parameters in the same group may be decomposed into the locally shared parameter {tilde over (σ)}g and the second adaptive parameter {τi}i∈g for the corresponding task, as shown in Equation 3 below, for example.
In Equation 3, τi,j denotes a j-th element of an i-th adaptive parameter matrix, μg denotes the cluster centroid of a group g, and β denotes a threshold and may be set to a fairly small number. In other words, when the difference between the maximum and minimum values of the j-th elements of the adaptive parameters included in the same group is less than β, which is a very small value, the values of the j-th elements of the adaptive parameters may be set to “0”, and the j-th element {tilde over (σ)}g,j of the locally shared parameter may be set to “μg,j”. Through this, the training apparatus of one or more embodiments may make the adaptive parameters for individual tasks sparser by generating a locally shared parameter as redundant knowledge within the same group.
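A minimal sketch of the consolidation described above is shown below, assuming scikit-learn's KMeans for the grouping; the array shapes, the threshold value, and the flattening of each adaptive parameter matrix into a vector for clustering are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def consolidate(adaptive_params, K=2, beta=1e-3):
    """Group adaptive parameters and split out locally shared parameters.

    adaptive_params: list of (N, M) matrices, one per learned task.
    Returns (labels, locally_shared, second_adaptive).
    """
    flat = np.stack([p.ravel() for p in adaptive_params])
    labels = KMeans(n_clusters=K, n_init=10).fit_predict(flat)

    locally_shared = {}
    second_adaptive = [p.copy() for p in adaptive_params]
    for g in range(K):
        members = [i for i in range(len(adaptive_params)) if labels[i] == g]
        group = np.stack([adaptive_params[i] for i in members])
        centroid = group.mean(axis=0)                      # mu_g
        # Elements whose values barely differ across the group are treated
        # as redundant knowledge and moved into the locally shared parameter.
        redundant = (group.max(axis=0) - group.min(axis=0)) < beta
        locally_shared[g] = np.where(redundant, centroid, 0.0)  # sigma~_g
        for i in members:
            second_adaptive[i] = np.where(redundant, 0.0, adaptive_params[i])
    return labels, locally_shared, second_adaptive

# Hypothetical usage with three tasks' adaptive parameters:
# labels, shared, sparse = consolidate([tau1, tau2, tau3], K=2)
```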
In an example, the hierarchical knowledge consolidation described above may be performed each time s new tasks are learned, and the centroids of the groups may be initialized each time. In addition, each time the hierarchical knowledge consolidation is performed, the number of groups may be increased by k, such that a total of K+k groups may be determined. This may properly increase the number of groups as the number of tasks to be learned increases.
When a locally shared parameter is utilized for hierarchical knowledge consolidation, the objective function may be expressed as shown in Equation 4 below, for example.
In Equation 4, it may be verified that an adaptive parameter {tilde over (τ)}i of an i-th task is decomposed into a locally shared parameter {tilde over (σ)}g and a sparser second adaptive parameter τi corresponding to the i-th task.
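Equation 4 is not reproduced above; substituting the decomposition {tilde over (τ)}i={tilde over (σ)}g+τi described in this paragraph into the last term of Equation 1 suggests a form such as the following, which is a hedged reconstruction and not necessarily the exact Equation 4.

```latex
\min_{\sigma,\,\tau_t,\,v_t,\,\{\tilde{\sigma}_g\},\,\{\tau_i\}_{i<t}}\;
  \mathcal{L}\bigl(\{\sigma \otimes M_t + \tau_t\};\, \mathcal{D}_t\bigr)
  + \lambda_1 \lVert \tau_t \rVert_1
  + \lambda_2 \sum_{i=1}^{t-1}
      \bigl\lVert \theta_i^{*} - (\sigma \otimes M_i + \tilde{\sigma}_{g_i} + \tau_i) \bigr\rVert_2^2
\tag{4}
```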
The descriptions provided above with reference to the training examples are also applicable to the following examples, and thus a duplicate description is omitted.
Referring to the illustrated example, operations 910 through 930 described below may be performed by a data processing apparatus.
In operation 910, the data processing apparatus may select an adaptive parameter and an adaptive mask for a target task to be performed among a plurality of tasks. For example, in response to a request for inference about a t-th task, the data processing apparatus may select an adaptive parameter and an adaptive mask for the t-th task, and a shared parameter from among parameters stored in a memory.
In operation 920, the data processing apparatus may determine a model for the target task based on the adaptive parameter, the adaptive mask, and a shared parameter for the plurality of tasks. For example, the data processing apparatus may determine a parameter of the model for performing the t-th task to be θt=σ⊗Mt+τt. In this way, the data processing apparatus may determine the model parameter θt by applying the adaptive mask Mt to the shared parameter σ and then adding the adaptive parameter τt thereto.
In operation 930, the data processing apparatus may obtain output data from the model by inputting input data to be inferred into the determined model.
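The inference path of operations 910 through 930 may be sketched as follows; the storage layout, the optional locally shared parameter, and the tiny masked linear model used for illustration are assumptions, not a definitive implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

# Hypothetical parameter storage for two tasks of a 16x8 layer.
N, M = 16, 8
rng = np.random.default_rng(0)
storage = {
    "shared": rng.normal(size=(N, M)),                     # sigma
    "masks": [rng.uniform(size=M) for _ in range(2)],      # M1, M2
    "adaptive": [np.zeros((N, M)) for _ in range(2)],      # tau_1, tau_2 (sparse)
    "locally_shared": {0: np.zeros((N, M))},               # sigma~_g per group
    "group_of": [0, 0],                                    # task -> group
}

def infer(task_id, storage, x):
    # Operation 910: select the parameters stored for the target task.
    sigma = storage["shared"]
    mask_t = storage["masks"][task_id]
    tau_t = storage["adaptive"][task_id]
    sigma_g = storage["locally_shared"][storage["group_of"][task_id]]

    # Operation 920: determine the model parameter of the target task.
    theta_t = sigma * mask_t[np.newaxis, :] + sigma_g + tau_t

    # Operation 930: obtain output data from the determined model
    # (a single masked linear layer with ReLU, purely for illustration).
    return relu(theta_t @ x)

output = infer(task_id=1, storage=storage, x=rng.normal(size=M))
```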
The descriptions provided above are also applicable to the apparatuses described below, and thus a duplicate description is omitted.
The training apparatus and the data processing apparatus described herein may be used in various fields such as image processing, object recognition, speech recognition, machine translation, machine interpretation, speech synthesis, and handwriting recognition, and may be applied to the design of continual learning-based large-scale artificial intelligence models. In addition, the training apparatus and the data processing apparatus may also be utilized when task-adaptive modeling is required in linear learning or deep learning networks.
Referring to the illustrated example, a training apparatus 1000 may include a processor 1010 (e.g., one or more processors) and a storage device 1020 (e.g., one or more memories).
The storage device 1020 may store information or data to be used for a processing operation of the training apparatus 1000. For example, the storage device 1020 may store training data used for training a neural network. Further, the storage device 1020 may store instructions to be executed by the processor 1010. The storage device 1020 may include computer-readable storage media, such as a random-access memory (RAM), a dynamic random-access memory (DRAM), a static random-access memory (SRAM), a magnetic hard disk, an optical disk, a flash memory, and an electrically programmable read-only memory (EPROM), or other types of computer-readable storage media known in the art.
The processor 1010 may control overall operations of the training apparatus 1000 and execute functions and/or instructions to be executed within the training apparatus 1000. The processor 1010 may perform a process of training a neural network based on training data, and perform the one or more operations described above in relation to the training process.
In an example, the processor 1010 may determine an adaptive parameter and an adaptive mask for a current task to be learned, determine a model parameter for the current task based on the adaptive parameter, the adaptive mask, and a shared parameter for a plurality of tasks, and train the model parameter and an adaptive parameter of a previous task with respect to the current task. The adaptive parameter of the previous task and the shared parameter are trained with respect to the previous task.
Referring to the illustrated example, a data processing apparatus 1100 may include a processor 1110, a storage device 1120, a sensor 1130, an input device 1140, an output device 1150, and a communication device 1160.
The storage device 1120 may store information or data necessary for a processing operation of the data processing apparatus 1100. For example, the storage device 1120 may store input data that is a subject of data processing. Further, the storage device 1120 may store instructions to be executed by the processor 1110. The storage device 1120 may include computer-readable storage media, such as a RAM, a DRAM, an SRAM, a magnetic hard disk, an optical disk, a flash memory, and an EPROM, or other types of computer-readable storage media known in the art.
The processor 1110 may control overall operations of the data processing apparatus 1100 and execute functions and/or instructions to be executed within the data processing apparatus 1100. The data processing apparatus 1100 may include one or more processors 1110, and the processor 1110 may include, for example, a neural processing unit (NPU), a graphics processing unit (GPU), a tensor processing unit (TPU), and the like. The processor 1110 may perform a process of processing the input data using the neural network, and perform the one or more operations described above in relation to the corresponding process.
In an example, the processor 1110 may select an adaptive parameter and an adaptive mask for a target task to be performed among a plurality of tasks, determine a model for the target task based on the adaptive parameter, the adaptive mask, and a shared parameter for the plurality of tasks, and obtain output data from the model by inputting input data to be inferred into the determined model.
The sensor 1130 may include one or more sensors. For example, the sensor 1130 may include an image sensor, a speech sensor, a radar sensor, and a measurement sensor. Image data, speech data, or radar data acquired by the sensor 1130 may be used as the input data described above.
The input device 1140 may receive a user input from a user. The input device 1140 may include, for example, a keyboard, a mouse, a touch screen, a microphone, or any other device that detects the input from the user and transmits the detected input.
The output device 1150 may provide an output of the data processing apparatus 1100 to the user through a visual, auditory, or tactile method. The output device 1150 may include, for example, a display, a speaker, a lighting device, a haptic device, or any other device that provides the output to the user.
The communication device 1160 may communicate with an external device through a wired or wireless network. For example, the communication device 1160 may communicate with other external devices using a wired communication method or a wireless communication method such as Bluetooth, Wireless Fidelity (Wi-Fi), Third Generation (3G), Long-Term Evolution (LTE), or the like.
The training apparatuses, processors, storage devices, data processing apparatuses, memories, sensors, input devices, output devices, communication devices, training apparatus 1000, processor 1010, storage device 1020, data processing apparatus 1100, processor 1110, memory 1120, sensor 1130, input device 1140, output device 1150, communication device 1160, and other apparatuses, devices, units, modules, and components described herein are implemented by or representative of hardware components configured to perform the operations described in this application.
The methods described above that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above executing instructions or software to perform the operations described in this application that are performed by the methods.
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions used herein, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, blue-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents.
This application claims the benefit under 35 USC 119(e) of U.S. Provisional Application No. 62/976,528 filed on Feb. 14, 2020, and the benefit under 35 USC 119(a) of Korean Patent Application No. 10-2020-0104036 filed on Aug. 19, 2020, in the Korean Intellectual Property Office, the entire disclosures of which are incorporated herein by reference for all purposes.