This application claims the benefit under 35 USC § 119(a) of Korean Patent Application No. 10-2022-0003434, filed on Jan. 10, 2022, in the Korean Intellectual Property Office, the entire disclosure of which is incorporated herein by reference for all purposes.
The following description relates to a method and apparatus with object recognition.
Technological automation or recognition processes have been performed through the implementation of a neural network, by a processor as a special computing structure, which provides intuitive mapping for computation between an input and an output after considerable training. An ability to be trained to generate such mapping may be referred to as a learning ability of the neural network. Furthermore, because of its specialized training, such a trained neural network may have, for example, a generalization ability to provide a relatively accurate output with respect to an untrained input pattern.
In one general aspect, a processor-implemented method includes extracting feature maps including local feature representations from an input image, generating a global feature representation corresponding to the input image by fusing the local feature representations, and performing a recognition task on the input image based on the local feature representations and the global feature representation.
The generating of the global feature representation may include fusing pooling results corresponding to the local feature representations.
The pooling may include global average pooling.
The generating of the global feature representation may include performing an attention mechanism using query data pre-trained in association with the recognition task.
The generating of the global feature representation by performing the attention mechanism may include determining key data and value data corresponding to the local feature representations, determining a weighted sum of the value data based on similarity between the key data and the query data, and determining the global feature representation based on the weighted sum.
The performing of the recognition task may include estimating a first recognition result corresponding to the local feature representations, using a first recognition model, and estimating a second recognition result corresponding to the global feature representation, using a second recognition model.
The first recognition model may include an object detection model configured to estimate a detection result from the local feature representations, and the second recognition model may include a classification model configured to estimate a classification result from the global feature representation.
The detection result may include one or more of bounding box information, objectness information, or class information, and the classification result may include one or more of multi-class classification information, context classification information, or object count information.
The method may further include training the first recognition model and training the second recognition model, wherein the training of the first recognition model may affect the training of the second recognition model, and the training of the second recognition model may affect the training of the first recognition model.
The training of the first recognition model and the training of the second recognition model may include using an in-training feature extraction model, or a trained feature extraction model, to extract training feature maps including in-training local feature representations from a training input image, using an in-training feature fusion model, or a trained feature fusion model, to determine a training global feature representation corresponding to the training input image by fusing the training local feature representations, using an in-training first recognition model to estimate a training first recognition result corresponding to the training local feature representations, using an in-training second recognition model to estimate a training second recognition result corresponding to the training global feature representation, and generating the first recognition model and the second recognition model by training the in-training first recognition model and the in-training second recognition model together based on the training first recognition result and the training second recognition result.
The performing of the recognition task may further include determining a task result recognized by the recognition task by fusing the first recognition result and the second recognition result.
The recognition task may correspond to one of plural task candidates, the task candidates having respectively associated pre-trained query data items, and the generating of the global feature representation may further include selecting, from among the pre-trained query data items, a query data item associated with the recognition task, and determining the global feature representation by performing an attention mechanism based on the selected query data item.
The method may further include capturing the input image using a camera.
In one general aspect, a processor-implemented method includes using a feature extraction model to extract feature maps including local feature representations from an input image, using a feature fusion model to determine a global feature representation corresponding to the input image by fusing the local feature representations, using a first recognition model to estimate a first recognition result corresponding to the local feature representations, using a second recognition model to estimate a second recognition result corresponding to the global feature representation, and based on the first recognition result and the second recognition result, training one or more of the feature extraction model, the feature fusion model, the first recognition model, or the second recognition model.
The method may further include determining a training loss based on the first recognition result and the second recognition result, wherein the training is based on the training loss.
The first recognition model and the second recognition model may be trained as an integrated model such that each affects the training of the other.
The second recognition model may include a plurality of classification models respectively corresponding to task candidates, the feature fusion model may be configured to determine the global feature representation by performing an attention mechanism based on query data corresponding to a current task candidate among the task candidates, and the determining of the training loss may include determining the training loss by applying a classification result of a classification model corresponding to the current task candidate among the task candidates as the second recognition result.
Training the first recognition model may affect training the second recognition model and training the second recognition model may affect training the first recognition model.
In one general aspect, an electronic apparatus includes a processor configured to extract feature maps including respective local feature representations from an input image, determine a global feature representation corresponding to the input image by fusing the local feature representations, and perform a recognition task on the input image based on the local feature representations and the global feature representation.
The electronic apparatus may further include a camera configured to generate the input image.
The processor may be further configured to determine the global feature representation by fusing pooling results corresponding to the local feature representations.
The processor may be further configured to determine the global feature representation by performing an attention mechanism using query data pre-trained in response to the recognition task.
The attention mechanism may include a vision transformer model that performs the fusing based on similarity between the query data and key data, with the similarity weighting corresponding value data.
The processor may be further configured to use a first recognition model to estimate a first recognition result corresponding to the local feature representations, and use a second recognition model to estimate a second recognition result corresponding to the global feature representation.
The processor may be further configured to select between the first recognition model and the second recognition model based on the recognition task.
The first recognition model may include an object detection model configured to estimate a detection result corresponding to each of the local feature representations, and the second recognition model may include a classification model configured to estimate a classification result corresponding to the global feature representation.
The processor may be further configured to determine a task result of the recognition task by fusing the first recognition result and the second recognition result.
In one general aspect, a method includes generating feature maps from an input image, wherein the generating of the feature maps is performed by one or more layers of a neural network, forming a feature pyramid from the feature maps, providing the feature pyramid to a first model that outputs an object recognition prediction based on the feature pyramid, extracting global features of the image from the feature pyramid, and providing the global features to another one or more layers of the neural network to generate a scene recognition prediction based on the global features.
The neural network may include a convolutional neural network.
The extracting of the global features may include either global average pooling of the feature maps or attention-based vision transformation of the feature maps.
The scene recognition prediction may include a scene classification.
The method may further include generating a scene result from the object recognition prediction, and generating a final scene prediction by fusing the scene result with the scene recognition prediction.
The neural network may include one or more additional layers as the first model.
In one general aspect, a non-transitory computer-readable storage medium stores instructions that, when executed by a processor, configure the processor to perform any of the methods.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same or like elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order. Also, descriptions of features that are known after an understanding of the disclosure of this application may be omitted for increased clarity and conciseness.
The features described herein may be embodied in different forms and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
The terminology used herein is for describing various examples only and is not to be used to limit the disclosure. The articles “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. As non-limiting examples, terms “comprise” or “comprises,” “include” or “includes,” and “have” or “has” specify the presence of stated features, numbers, operations, members, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, members, elements, and/or combinations thereof.
Throughout the specification, when a component or element is described as being “connected to,” “coupled to,” or “joined to” another component or element, it may be directly “connected to,” “coupled to,” or “joined to” the other component or element, or there may reasonably be one or more other components or elements intervening therebetween. When a component or element is described as being “directly connected to,” “directly coupled to,” or “directly joined to” another component or element, there can be no other elements intervening therebetween. Likewise, expressions, for example, “between” and “immediately between” and “adjacent to” and “immediately adjacent to” may also be construed as described in the foregoing.
Although terms such as “first,” “second,” and “third”, or A, B, (a), (b), and the like may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Each of these terminologies is not used to define an essence, order, or sequence of corresponding members, components, regions, layers, or sections, for example, but used merely to distinguish the corresponding members, components, regions, layers, or sections from other members, components, regions, layers, or sections. Thus, a first member, component, region, layer, or section referred to in the examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Unless otherwise defined, all terms, including technical and scientific terms, used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure pertains and based on an understanding of the disclosure of the present application. Terms, such as those defined in commonly used dictionaries, are to be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the disclosure of the present application and are not to be interpreted in an idealized or overly formal sense unless expressly so defined herein. The use of the term “may” herein with respect to an example or embodiment, e.g., as to what an example or embodiment may include or implement, means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
A DNN may include a fully connected network (FCN) portion, a convolutional neural network (CNN) portion, and/or a recurrent neural network (RNN) portion. For example, at least a portion of the layers included in the DNN may correspond to a CNN, and another portion of the layers may correspond to an FCN. The CNN may be referred to as a convolutional layer (i.e., one or more convolutional layers), and the FCN may be referred to as a fully connected layer. In the case of the CNN, data input to each layer may be referred to as an input feature map (e.g., a tensor map, a 2D map, a 1D vector, etc.), and data output from each layer may be referred to as an output feature map. The input feature map and the output feature map (e.g., a tensor map, a 2D map, a 1D vector, etc.) may also be referred to as activation data, where the input feature map may be the activation results of a previous layer input to the current layer (or of subsequent pooling and/or normalization layer(s) that provided the activation results from the previous layer), and where the output feature map may be the activation results of the current layer (or of subsequent pooling and/or normalization layer(s) that provide the activation results of the current layer). When a convolutional layer corresponds to an input layer, an input feature map of the input layer may be an input image.
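As a non-limiting illustration only (the disclosure does not prescribe any particular framework), the following sketch assumes PyTorch and shows a DNN with a convolutional portion followed by a fully connected portion, with the intermediate activation data serving as the feature maps described above; all layer sizes are hypothetical.

```python
import torch
import torch.nn as nn

# Convolutional (CNN) portion: each layer maps an input feature map to an output feature map.
conv_portion = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),
)
# Fully connected (FCN) portion operating on the flattened activation data.
fc_portion = nn.Linear(16 * 16 * 16, 10)

image = torch.randn(1, 3, 32, 32)      # the input image is the input feature map of the input layer
feature_map = conv_portion(image)      # output feature map (activation data) of the CNN portion
output = fc_portion(feature_map.flatten(1))
```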
After being trained based on deep learning, the neural network may perform an inference, according to its training, by mapping input data to output data, which may have a nonlinear relationship to each other. Deep learning may be a machine learning scheme for, as non-limiting examples, image or voice recognition from a large training data set. Deep learning may be construed as an optimization problem-solving process of finding a point at which energy (e.g., loss or error) is minimized while training the neural network using training data (e.g., prepared training data).
Through supervised or unsupervised learning, for example, a structure of the neural network, e.g., resultant parameters within a model, may be obtained, and the input data and the output data may be mapped to each other through the parameters. The parameters may include connection weights, e.g., within or between layers. When a width and a depth of the neural network are sufficiently large, the neural network may have a capacity sufficient to implement an arbitrary trained function. The performance of the neural network may increase with the amount of training data with which it is trained.
The neural network may be trained in advance, that is, before the neural network starts performing inferences. The neural network may be said to start when the neural network is ready for performing inferences. For example, starting the neural network may include loading the neural network into a memory and/or loading input data (for inference) to the neural network after the neural network is loaded into the memory.
The feature extraction model 110 may extract feature maps from the input image 101. The feature maps may be referred to as a feature map set. The feature extraction model 110 may include at least one convolutional layer and at least one pooling layer. The feature extraction model 110 may generate the feature map set using the at least one convolutional layer and the at least one pooling layer.
The feature map set may be divided into a plurality of feature representations. For example, the feature map set may have a dimension of W*H*C (width, height, channel) and may include W*H feature representations. In this case, a feature representation may correspond to a pixel vector. Each feature representation may include local information of a specific region of the input image 101 and may be referred to as a local feature representation. For example, the input image 101 may be divided into grid cells, and each local feature representation may correspond to a specific grid cell of the input image 101. However, the region of the input image 101 to which each local feature representation corresponds may be determined by the feature extraction model 110 in various ways.
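For illustration, the following sketch (PyTorch assumed; the W, H, and C values are hypothetical) shows how a W*H*C feature map set may be viewed as W*H local feature representations, each a C-dimensional pixel vector associated with one grid cell.

```python
import torch

C, H, W = 256, 13, 13                       # hypothetical feature map set dimensions
feature_maps = torch.randn(1, C, H, W)      # output of a feature extraction model

# One C-dimensional local feature representation (pixel vector) per grid cell.
local_feats = feature_maps.flatten(2).transpose(1, 2)   # shape: (1, H*W, C)
print(local_feats.shape)                    # torch.Size([1, 169, 256])
```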
The first recognition model 120 may estimate the first recognition result 102. The first recognition result 102 may include a detection result. The first recognition model 120 may estimate detection results, each detection result corresponding to a respective one of the local feature representations. For example, the detection result for a local feature representation may be a vector, and the first recognition model 120 may determine the vector values of that detection result. In this example, the detection result may include bounding box information, objectness information (e.g., a probability that an object exists), and/or class information, corresponding to each of the local feature representations. The detection result may include any other information representing an object. The first recognition model 120 may determine the first recognition result 102 by integrating the detection results of the local feature representations.
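A minimal sketch of such a per-cell detection head follows (PyTorch assumed; the 1x1 convolution, class count, and grid size are illustrative assumptions rather than the disclosed model): each grid cell yields a detection vector holding bounding box, objectness, and class information.

```python
import torch
import torch.nn as nn

C, num_classes = 256, 20
det_head = nn.Conv2d(C, 4 + 1 + num_classes, kernel_size=1)   # one detection vector per grid cell

feature_maps = torch.randn(1, C, 13, 13)
det = det_head(feature_maps)                # shape: (1, 4 + 1 + num_classes, 13, 13)
bbox = det[:, 0:4]                          # bounding box information (position and size)
objectness = torch.sigmoid(det[:, 4:5])     # probability that an object exists in the cell
class_scores = det[:, 5:]                   # class information
```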
The first recognition result 102 may be based on local information at the level of a grid cell (e.g., pixel level) and therefore may be vulnerable to data (e.g., objects or object classes) not seen during training. The feature fusion model 130 may derive global feature representations from the local feature representations. The feature fusion model 130 may determine a global feature representation corresponding to the input image 101 by fusing (e.g., through weighted sums) the local feature representations, and the second recognition model 140 may estimate the second recognition result 103 corresponding to the global feature representation. One or more of the feature extraction model 110, the first recognition model 120, the feature fusion model 130, and the second recognition model 140 may be trained as an integrated model, where the training of one or more of the models influences the training of one or more other models, or where all models are trained together. Recognition performance may be improved as local information and global information are applied together when training the integrated model. In addition, an orientation to a specific recognition task may be formed based on a structure of the feature fusion model 130 and the second recognition model 140 and/or a design of the training process.
In an example, the feature fusion model 130 may generate pooling results corresponding to the local feature representations by performing a pooling operation (e.g., a global average pooling operation) and the feature fusion model 130 may determine the global feature representation by fusing the pooling results. In another example, the feature fusion model 130 may determine the global feature representation by performing an attention mechanism (e.g., using a vision transformer model) using pre-trained query data. For example, the feature fusion model 130 may include a vision transformer. Based on fusion of the local feature representations, the global feature representation may include global information representing the local feature representations, and recognition performance may be improved because the global information is applied to the recognition process.
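The two fusion options above may be sketched as follows (PyTorch assumed; the projection weights, dimensions, and single-query attention are illustrative simplifications of a vision transformer):

```python
import torch
import torch.nn.functional as F

local_feats = torch.randn(1, 169, 256)             # (batch, H*W, C) local feature representations

# Option 1: pooling-based fusion (global average pooling over the grid cells).
global_feat_gap = local_feats.mean(dim=1)          # (1, 256)

# Option 2: attention-based fusion with a pre-trained, task-associated query vector.
W_k = torch.randn(256, 256)                        # hypothetical key projection
W_v = torch.randn(256, 256)                        # hypothetical value projection
query = torch.randn(1, 256)                        # pre-trained query data
keys = local_feats @ W_k                           # key data from the local feature representations
values = local_feats @ W_v                         # value data from the local feature representations
scores = (keys @ query.unsqueeze(-1)).squeeze(-1) / 256 ** 0.5   # similarity of query and keys
attn = F.softmax(scores, dim=-1)
global_feat_attn = (attn.unsqueeze(-1) * values).sum(dim=1)      # weighted sum of the value data
```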
The second recognition model 140 may estimate the second recognition result 103 corresponding to the global feature representation. The second recognition result 103 may include information corresponding to a current target task (e.g., scene understanding for the input image 101). The second recognition result 103 may include a classification result. For example, the classification result may include one or more of multi-class classification information (e.g., probabilities of the presence of multiple classes of objects), context classification information (e.g., categorizing a scene), or object count information (e.g., probabilities of respective numbers of objects). Other classification information suitable for various purposes of a specific task may be set, and the classification result may include such classification information.
A training loss may be designed such that a result of a specific target recognition task is represented as the second recognition result 103, and the feature extraction model 110, the first recognition model 120, the feature fusion model 130, and the second recognition model 140 may be trained using the training loss. In a training process, the second recognition model 140 may be trained to generate the second recognition result 103 in a way that reduces the training loss. Similarly, the feature fusion model 130 may be trained to fuse the local feature representations and generate the global feature representation in a way that reduces the training loss. In addition, the feature extraction model 110 may be trained to generate the local feature representations in a way that reduces the training loss. The first recognition model 120 may be trained based on the corresponding local feature representations. Accordingly, the feature extraction model 110, the first recognition model 120, and the feature fusion model 130 may have a task-oriented characteristic corresponding to the target recognition task.
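A training step consistent with this description might look as follows (a sketch only, assuming PyTorch; the loss terms are placeholders standing in for whatever detection and classification losses the target task calls for): a single backward pass lets the loss reach all four models, so they are trained together as an integrated model.

```python
import torch
import torch.nn.functional as F

def training_step(extractor, detector, fusion, classifier,
                  image, det_target, cls_target, optimizer):
    local_feats = extractor(image)            # in-training local feature representations
    first_result = detector(local_feats)      # in-training first recognition result
    global_feat = fusion(local_feats)         # in-training global feature representation
    second_result = classifier(global_feat)   # in-training second recognition result

    # Placeholder losses; the actual terms depend on the target recognition task.
    det_loss = F.mse_loss(first_result, det_target)
    cls_loss = F.binary_cross_entropy_with_logits(second_result, cls_target)
    loss = det_loss + cls_loss

    optimizer.zero_grad()
    loss.backward()                           # gradients flow into all four models
    optimizer.step()
    return loss.item()
```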
The detection result 311 may include bounding box information 312, objectness information 313, and/or class information 314, corresponding to the local feature representation 302. The detection result 311 may include other information related to an object. The bounding box information 312 may include position information and size information of a bounding box corresponding to the object. The objectness information 313 may indicate whether the local feature representation 302 corresponds to the object. The class information 314 may represent a class of the object. The objectness information 313 may be omitted.
The multi-class classification information 721 may indicate whether the input image 701 has objects of a plurality of classes of interest. For example, the multi-class classification information 721 may include a probability that the input image 701 has an object of each class of interest. The vision transformer model 710 and the classification model 720 may be trained such that a training loss of the multi-class classification information 721 is reduced. More specifically, the query data 713 may be trained to derive the global feature representation 714 that reduces the training loss, and the classification model 720 may be trained to estimate the multi-class classification information 721 that reduces the training loss.
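As an illustrative sketch (PyTorch assumed; the feature dimension and class count are hypothetical), multi-class classification information of this kind may be produced by a linear head with independent per-class presence probabilities and trained with a multi-label loss:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 20
cls_head = nn.Linear(256, num_classes)        # stands in for the classification model

global_feat = torch.randn(1, 256)             # stands in for the global feature representation
logits = cls_head(global_feat)
presence_prob = torch.sigmoid(logits)         # probability that each class of interest is present

# Training target: multi-hot vector of the classes present in the training image.
target = torch.zeros(1, num_classes)
target[0, [3, 7]] = 1.0
loss = F.binary_cross_entropy_with_logits(logits, target)
```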
In a process of training the vision transformer model 710 and the classification model 720, a feature extraction model (e.g., the feature extraction model 110 of
Referring to the example of
A target recognition task of
Referring to the example of
First query data 813 and a first classification model 820 may be trained with training data corresponding to the first task candidate, and second query data 814 and a second classification model 830 may be trained with training data corresponding to the second task candidate. Accordingly, in a training stage, the vision transformer model 810 may be trained to determine a first global feature representation 815 specific to the first task candidate, based on the first query data 813, and the first classification model 820 may be trained to estimate the first count information 821 optimized for the first task candidate, based on the first global feature representation 815.
When the first task candidate is selected as a current recognition task for an inference stage, the vision transformer model 810 may determine the first global feature representation 815 based on the key data 811, the value data 812, and the first query data 813. Continuing with the inference stage, the first classification model 820 may estimate the first count information 821 from the first global feature representation 815. When the second task candidate is selected as a current recognition task, the vision transformer model 810 may determine a second global feature representation 816, based on the key data 811, the value data 812, and the second query data 814. And, continuing with the current inference/recognition task, the second classification model 830 may estimate the second count information 831 from the second global feature representation 816.
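The task-conditioned behavior described above may be sketched as follows (PyTorch assumed; names, dimensions, and the single-vector queries are illustrative): the selected task candidate picks both its pre-trained query data and its classification head.

```python
import torch
import torch.nn as nn

C, n_tasks, n_cells = 256, 2, 169
queries = nn.Parameter(torch.randn(n_tasks, C))                    # pre-trained query data per task candidate
heads = nn.ModuleList([nn.Linear(C, 10) for _ in range(n_tasks)])  # one classification model per task candidate

def run_task(task_id, keys, values):
    q = queries[task_id]                              # select the query data of the current task
    attn = torch.softmax(keys @ q / C ** 0.5, dim=0)  # similarity between query data and key data
    global_feat = attn @ values                       # task-specific global feature representation
    return heads[task_id](global_feat)                # estimate with the matching classification model

keys, values = torch.randn(n_cells, C), torch.randn(n_cells, C)
first_result = run_task(0, keys, values)              # first task candidate selected
second_result = run_task(1, keys, values)             # second task candidate selected
```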
Another example is illustrated in
When the first task candidate is selected as a current recognition task for an inference stage, a vision transformer model 840 may determine a first global feature representation 845 based on key data 841, value data 842, and the first query data 843. Continuing with the current recognition task, the first classification model 850 may estimate the context classification information 851 corresponding to the first global feature representation 845. When the second task candidate is selected as a current recognition task, the vision transformer model 840 and the second classification model 860 may estimate the multi-class classification information 861 based on the second query data 844 and a second global feature representation 846.
The first recognition result 902 and the second recognition result 903 may be fused by a fusion block 960, and accordingly, a task result 904 of a current recognition task is determined. The first recognition result 902 may be post-processed by a post-processing block 950 before being fused with the second recognition result 903. For example, the first recognition result 902 may include an object detection result, and the second recognition result 903 may include a scene classification result. In this case, scene classification may be performed through post-processing of the object detection result, a scene classification result of the first recognition result 902 may be fused with the scene classification result of the second recognition result 903, and the task result 904 may be determined accordingly.
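The fusion of the two results may be sketched as follows (PyTorch assumed; the object-to-scene mapping used for post-processing and the averaging-based fusion are illustrative assumptions, not the disclosed post-processing block): a scene result derived from the detection result is combined with the scene classification result obtained from the global feature representation.

```python
import torch

num_scenes, num_object_classes = 5, 20

def post_process(detections, obj_to_scene):
    # detections: (N, num_object_classes) class probabilities of detected objects
    scene_votes = detections @ obj_to_scene                # map detected objects to scene evidence
    return torch.softmax(scene_votes.sum(dim=0), dim=0)    # scene result from the first recognition result

detections = torch.rand(8, num_object_classes)
obj_to_scene = torch.rand(num_object_classes, num_scenes)  # hypothetical object-to-scene mapping
scene_from_detection = post_process(detections, obj_to_scene)

scene_from_global = torch.softmax(torch.randn(num_scenes), dim=0)   # second recognition result
task_result = 0.5 * (scene_from_detection + scene_from_global)      # fusion by simple averaging
```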
Referring to the example of
The processor 1110 may execute instructions (e.g., stored in a memory) to perform any one, any combination, or all of the operations described herein with reference to any one, any combination of, or all of
The processor 1210 may execute instructions or functions to be executed by the electronic apparatus 1200. For example, the processor 1210 may process the instructions stored in the memory 1220 or the storage device 1240. The processor 1210 may be configured to perform any of the operations described with reference to
The camera 1230 may capture a photo and/or a video. For example, the camera 1230 may generate any of the input images mentioned herein. The storage device 1240 may include a computer-readable storage medium or a computer-readable storage device. The storage device 1240 may persistently store a greater amount of information than the memory 1220. For example, the storage device 1240 may include a magnetic hard disk, an optical disc, a flash memory, a floppy disk, or other non-volatile memories.
The input device 1250 may provide an input, e.g., when manipulated by a user. For example, the input may be a keyboard and/or a mouse input, a touch input, a voice input, an image input (other than through the capturing of the camera 1230), etc. For example, the input device 1250 may include a keyboard, a mouse, a touch screen, a microphone, or any other device that detects manipulation by the user and provides the corresponding input to the electronic apparatus 1200. The output device 1260 may provide an output of the electronic apparatus 1200 to the user through a visual, auditory, and/or haptic rendering. The output device 1260 may include, for example, a display, a touch screen, a speaker, a vibration generator, and/or any other device that provides the output. The network interface 1270 may communicate with an external device through a wired and/or wireless network.
In operation 1320, the object recognition apparatus may determine a global feature representation corresponding to the input image by fusing the local feature representations. In an example, the object recognition apparatus may determine the global feature representation by fusing pooling results respectively corresponding to the local feature representations. In another example, the object recognition apparatus may determine the global feature representation by performing an attention mechanism of a transformer using transformer query data pre-trained for, and selected in response to, a recognition task. The object recognition apparatus may determine transformer key data and transformer value data corresponding to the local feature representations, determine a weighted sum of the transformer value data based on similarity between the transformer key data and the transformer query data, and determine the global feature representation based on the weighted sum.
In operation 1330, the object recognition apparatus may perform the recognition task on the input image based on the local feature representations and the global feature representation. The object recognition apparatus may estimate a first recognition result corresponding to the local feature representations by using a first recognition model and may estimate a second recognition result corresponding to the global feature representation by using a second recognition model. The first recognition model may be an object detection model for estimating a detection result corresponding to each of the local feature representations, and the second recognition model may be a classification model for estimating a classification result corresponding to the global feature representation. The detection result may include bounding box information, objectness information, and/or class information (corresponding to each of the local feature representations) and the classification result may include multi-class classification information, context classification information, and/or object count information. The first recognition model and the second recognition model may be trained as an integrated model, where, during training, the training of one or more of the models affects the training of one or more other of the models.
The object recognition apparatus may determine a task result of the recognition task by fusing the first recognition result and the second recognition result. The recognition task may correspond to one of various task candidates, and the object recognition apparatus may determine the global feature representation by performing an attention mechanism based on query data corresponding to the recognition task (the query data may be selected from among a series of query data pre-trained for the respective task candidates). In addition, the description provided with reference to
In operation 1440, the object recognition apparatus may use a second recognition model to estimate a second recognition result corresponding to the global feature representation. In operation 1450, the object recognition apparatus may determine a training loss based on the first recognition result and the second recognition result. The second recognition model may include a plurality of classification models corresponding to respective task candidates, the feature fusion model may determine the global feature representation by performing an attention mechanism based on query data corresponding to a current task candidate among the task candidates, and the object recognition apparatus may determine a training loss by applying a classification result of a classification model corresponding to the current task (the classification model selected from among the classification models) as the second recognition result.
In operation 1460, the object recognition apparatus may train the feature extraction model, the feature fusion model, the first recognition model, and/or the second recognition model, based on the training loss. The first recognition model and the second recognition model may be trained as an integrated model, the training of each affecting the training of the other. In addition, the description provided with reference to
The computing apparatuses, the vehicles, the electronic devices, the processors, the memories, the image sensors, the vehicle/operation function hardware, the ADAS/AD systems, the displays, the information output system and hardware, the storage devices, and other apparatuses, devices, units, modules, and components described herein with respect to
The methods illustrated in
Instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above may be written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the one or more processors or computers to operate as a machine or special-purpose computer to perform the operations that are performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the one or more processors or computers, such as machine code produced by a compiler. In another example, the instructions or software include higher-level code that is executed by the one or more processors or computers using an interpreter. The instructions or software may be written using any programming language based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions herein, which disclose algorithms for performing the operations that are performed by the hardware components and the methods as described above.
The instructions or software to control computing hardware, for example, one or more processors or computers, to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, may be recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to one or more processors or computers so that the one or more processors or computers can execute the instructions. In one example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents.
Therefore, in addition to the above disclosure, the scope of the disclosure may also be defined by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.