This application claims the benefit under 35 USC § 119(a) of Korean Patent Application No. 10-2021-0171979, filed on Dec. 3, 2021, in the Korean Intellectual Property Office, the entire disclosure of which is incorporated herein by reference for all purposes.
The following description relates to searching for an optimal architecture of a neural network.
A typical neural architecture search may be one method of automatically searching for a potentially optimal architecture of a neural network for a given purpose. Whether such a search finds a suitable structure and shape for the architecture of a neural network depends on the capability of the search to resolve a given problem through deep learning. The potentially optimal neural network may be generated by selecting and combining primitive operations including predefined operators and functions; the set of such operations is also called a search space. Examples of operators include convolution, pooling, concatenation, skip connection, and the like.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In one general aspect, there is provided a computing apparatus, the apparatus including a processor configured to generate a neural network loss based on parameters of a candidate architecture for the neural network, measure first hardware resources used in operation of the neural network with the candidate architecture, generate a prediction, using a hardware resource prediction model, of second hardware resources that would be used for operating the neural network with the candidate architecture, determine a hardware resource loss based on the first hardware resources and the second hardware resources, and determine a target architecture of the neural network based on the neural network loss and the hardware resource loss.
The hardware resource prediction model may include a neural network configured to accept the parameters of the candidate architecture as inputs, to predict the second hardware resource of the neural network of the candidate architecture based on the parameters, and to output a hardware resource prediction value.
The processor may be configured to determine the hardware resource loss based on a difference between the first hardware resource and the second hardware resource, and update the parameters of the candidate architecture to minimize the hardware resource loss.
The processor may be configured to determine a weighted sum of the neural network loss and the hardware resource loss as an optimization loss and to determine the target architecture to minimize the optimization loss.
The processor may be configured to determine the target architecture and target parameters to reduce the neural network loss and the hardware resource loss.
For each layer of the neural network, a corresponding candidate architecture may be determined by selecting a respective candidate operation from among candidate operations of a corresponding layer.
Information associated with the selected candidate operation may be input to the hardware resource prediction model.
The first hardware resources may include any one or any combination of a measured power consumption, a memory demand, a number of operations, and a processing time to operate the neural network with the candidate architecture.
The processor may be configured to determine an optimization loss comprising the neural network loss and the hardware resource loss, and determine the target architecture by selecting a target operation that minimizes the optimization loss from among candidate operations of each layer included in the neural network of the candidate architecture.
The processor may be configured to determine the neural network loss based on a difference between validation data and result data output by the neural network of the candidate architecture processing training data.
In another general aspect, there is provided a processor-implemented method for searching for an optimal architecture of a neural network, the method including determining a neural network loss based on parameters of a candidate architecture for the neural network, measuring a first hardware resource needed to operate the neural network of the candidate architecture, predicting, using a hardware resource prediction module, a second hardware resource needed to operate the neural network of the candidate architecture, determining a hardware resource loss based on the first hardware resource and the second hardware resource, and determining a target architecture of the neural network based on the neural network loss and the hardware resource loss.
The hardware resource prediction module may include a neural network configured to accept the parameters of the candidate architecture as inputs, predict the second hardware resource of the neural network of the candidate architecture based on the parameters, and to output a hardware resource prediction value.
The determining of the target architecture may include determining the target architecture to minimize a weighted sum of the neural network loss and the hardware resource loss.
The determining of the target architecture may include determining the target architecture by selecting a target operation from among candidate operations of each layer included in the neural network of the candidate architecture.
The first and second hardware resources may include any one or any combination of a power consumption, a memory demand, a number of operations, and a processing time to operate the neural network of the candidate architecture.
In another general aspect, there is provided a processor-implemented method for identifying an architecture of a neural network, the method including determining a neural network loss based on parameters of a candidate architecture for the neural network, measuring first hardware resources used in operating the neural network with the candidate architecture, predicting, using a hardware resource prediction module, second hardware resources for the neural network with the candidate architecture, generating a hardware resource loss based on a difference between the first hardware resource and the second hardware resource, and selecting the candidate architecture as the target architecture for the neural network based on the neural network loss and the hardware resource loss, wherein the hardware resource prediction module may include a hardware resource prediction neural network trained to accept the parameters of the candidate architecture of the neural network as inputs and to output the second hardware resources.
The selecting of the target architecture of the neural network may include determining, as the target architecture, the architecture of the neural network with the least sum of the neural network loss and the weighted hardware resource loss.
The generating of the hardware resource loss may include generating the hardware resource loss based on applying a loss function that considers the difference between the first hardware resource and the second hardware resource.
The candidate architecture may include a neural network structure including a set of candidate operations for each layer of the neural network.
Other features and aspects will be apparent from the following detailed description, the drawings, and the claims.
Throughout the drawings and the detailed description, unless otherwise described or provided, the same drawing reference numerals will be understood to refer to the same elements, features, and structures. The drawings may not be to scale, and the relative size, proportions, and depiction of elements in the drawings may be exaggerated for clarity, illustration, and convenience.
The following detailed description is provided to assist the reader in gaining a comprehensive understanding of the methods, apparatuses, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the disclosure of this application. For example, the sequences of operations described herein are merely examples, and are not limited to those set forth herein, but may be changed as will be apparent after an understanding of the disclosure of this application, with the exception of operations necessarily occurring in a certain order.
The features described herein may be embodied in different forms and are not to be construed as being limited to the examples described herein. Rather, the examples described herein have been provided merely to illustrate some of the many possible ways of implementing the methods, apparatuses, and/or systems described herein that will be apparent after an understanding of the disclosure of this application.
Although terms such as “first,” “second,” and “third,” or A, B, C, (a), (b), (c), or the like may be used herein to describe various members, components, regions, layers, or sections, these members, components, regions, layers, or sections are not to be limited by these terms. Rather, these terms are only used to distinguish one member, component, region, layer, or section from another member, component, region, layer, or section. Thus, a first member, component, region, layer, or section referred to in the examples described herein may also be referred to as a second member, component, region, layer, or section without departing from the teachings of the examples.
Throughout the specification, when a component is described as being “connected to,” or “coupled to” another component, it may be directly “connected to,” or “coupled to” the other component, or there may be one or more other components intervening therebetween. In contrast, when an element is described as being “directly connected to,” or “directly coupled to” another element, there can be no other elements intervening therebetween.
As used herein, the singular forms “a”, “an”, and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. As used herein, the term “and/or” includes any one and any combination of any two or more of the associated listed items. As used herein, the terms “include,” “comprise,” and “have” specify the presence of stated features, numbers, operations, elements, components, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, numbers, operations, elements, components, and/or combinations thereof.
Unless otherwise defined, all terms used herein including technical or scientific terms have the same meanings as those generally understood consistent with and after an understanding of the present disclosure. Terms, such as those defined in commonly used dictionaries, should be construed to have meanings matching with contextual meanings in the relevant art and the present disclosure, and are not to be construed as an ideal or excessively formal meaning unless otherwise defined herein.
The use of the term “may” herein with respect to an example or embodiment (e.g., as to what an example or embodiment may include or implement) means that at least one example or embodiment exists where such a feature is included or implemented, while all examples are not limited thereto.
Hereinafter, examples will be described in detail with reference to the accompanying drawings. When describing the examples with reference to the accompanying drawings, like reference numerals refer to like components and a repeated description related thereto will be omitted.
A neural network or an artificial neural network (ANN) may generate a mapping between input information and output information, and may have a generalization capability to infer a relatively correct output with respect to input information that has not been used for training. The neural network may refer to a general model that has an ability to solve a problem or perform tasks, as non-limiting examples, in which nodes form the network through connections and parameters are adjusted through training.
The neural network may be implemented as an architecture having a plurality of layers including an input layer, hidden layers, and an output layer. In a convolutional layer of the neural network, an input image or map may be convolved with a filter called a kernel, and as a result, a plurality of feature maps may be output. The output feature maps may again be convolved in a subsequent convolutional layer, as input feature maps, with another kernel, and a plurality of new feature maps may be output. After the convolution operations are repeatedly performed, and potentially other layer operations performed, recognition or classification results for features of the input image may finally be output through the neural network, as non-limiting examples.
The neural network may be a machine learning model structure. In another example, a neural network layer may extract feature data from input data and provide an inference based on the feature data. The feature data may be data associated with a feature obtained by abstracting the input data. To generate such inferences, the neural network may map input data and output data in a nonlinear relationship based on deep learning. Deep learning, such as through back-propagation over multiple hidden layers of a neural network, may generate a trained neural network for various purposes or tasks, such as speech recognition or speech transliteration from a big data set, and may map input data and output data to each other through supervised and/or unsupervised learning, as only examples.
In an example, training an artificial neural network may indicate determining and adjusting weights and biases between layers or weights and biases among a plurality of nodes belonging to different layers adjacent to one another, as only non-limiting examples of such parameters.
Referring to
The system 100 may perform machine learning on the basic neural network 120 based on training data stored in a database (DB) 110. In the example of
In an example, the system 100 may train the basic neural network 120 through supervised learning. The system 100 may perform training based on an adjustment algorithm (e.g., a stochastic gradient descent scheme) and a loss function. The training data used for the training may include input data to be input to the neural network and validation data (or ground truth) corresponding to the input data. The basic neural network 120 may process the input data included in the training data to output result data. The system 100 may determine a neural network loss based on a comparison result between the validation data and the result data output from the basic neural network 120 and may search for an optimal architecture that minimizes the neural network loss.
The system 100 may search for the optimal architecture for a target neural network 130 by performing a multi-objective neural architecture search (NAS) method. The system 100 may not sample an architecture of the basic neural network 120. The system 100 may set multiple candidate operations for each layer of the basic neural network 120, select the most suitable candidate operation among such candidate operations, and search for the optimal architecture. Through such a search method, the system 100 may perform efficient optimization of the neural network and save time, energy, and computing resources.
The system 100 may output the target neural network 130 having the optimal architecture based on a given purpose (e.g., object classification, object recognition, voice recognition, etc.) through a training process. Searching for an optimal architecture may include determining an operation performed by each layer of the neural network and determining optimal values of parameters of the neural network. The system 100 may be executed by an apparatus for searching for the optimal architecture of the neural network (e.g., a computing apparatus 200 in
In an example, the system 100 may consider a hardware resource constraint when searching for an optimal architecture for the target neural network 130. The system 100 may perform optimization considering a hardware resource that is used when the neural network is executed and a validation loss for a task performed by the neural network. The system 100 may search for the target neural network 130 considering the hardware resource needed to operate the neural network. The hardware resource may be, for example, power consumption, memory demand, the number of operations (e.g., the number of multiply-accumulate (MAC) operations), a processing time, and a graphics processing unit (GPU) occupancy. The system 100 may consider one or more hardware resources, and in addition to the hardware resources described above, any hardware resource that may be observed numerically may be considered without limitation.
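As a non-limiting illustration of measuring one such resource, the sketch below (in PyTorch) times a single forward pass of a candidate network. The network definition and input shape are hypothetical stand-ins rather than elements of this disclosure; power, memory demand, or MAC counts could be gathered analogously with the appropriate counters or profilers.

```python
import time
import torch
import torch.nn as nn

# Hypothetical candidate network standing in for the basic neural network 120.
candidate = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=5, padding=2),
    nn.ReLU(),
)

x = torch.randn(1, 3, 32, 32)  # dummy input batch

# Measure wall-clock processing time of one forward pass (one measurable
# hardware resource; the same pattern applies to other observable resources).
with torch.no_grad():
    start = time.perf_counter()
    _ = candidate(x)
    elapsed = time.perf_counter() - start

print(f"measured processing time: {elapsed * 1e3:.3f} ms")
```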
When a candidate architecture for the basic neural network 120 is determined, the system 100 may determine the neural network loss of the candidate architecture and a hardware resource loss of the hardware resources and may search for a target architecture that minimizes the neural network loss and the hardware resource loss. The neural network loss and the hardware resource loss may together constitute an optimization loss for determining the target architecture.
When the hardware resource loss is to be determined, the system 100 may determine the hardware resource loss based on an actual measurement value of the hardware resource needed by the neural network of the candidate architecture and a prediction value of the hardware resource output by a hardware resource prediction module, such as a hardware resource prediction module 420 in
As described above, the system 100 may search for the optimal architecture of the neural network considering the hardware resource constraint and may perform optimization in a short period of time. In addition, the system 100 may search for the optimal architecture considering an actual hardware resource measurement value.
Referring to
The memory 220 may store a variety of data used by components (e.g., the processor 210) of the computing apparatus 200. A variety of data may include, for example, computer-readable instructions and input data or output data for operations related thereto. The memory 220 may include any one or any combination of a volatile memory and a non-volatile memory.
The volatile memory device may be implemented as a dynamic random-access memory (DRAM), a static random-access memory (SRAM), a thyristor RAM (T-RAM), a zero capacitor RAM (Z-RAM), or a twin transistor RAM (TTRAM).
The non-volatile memory device may be implemented as an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic RAM (MRAM), a spin-transfer torque (STT)-MRAM, a conductive bridging RAM (CBRAM), a ferroelectric RAM (FeRAM), a phase change RAM (PRAM), a resistive RAM (RRAM), a nanotube RRAM, a polymer RAM (PoRAM), a nano floating gate memory (NFGM), a holographic memory, a molecular electronic memory device, or an insulator resistance change memory. Further details regarding the memory 220 are provided below.
The processor 210 may control an overall operation of the computing apparatus 200 and may execute corresponding processor-readable instructions for performing operations of the computing apparatus 200. The processor 210 may execute, for example, software, to control one or more hardware components, such as other components described below in
In an example, as at least a part of data processing or operations, the processor 210 may store instructions or data in the memory 220, execute the instructions and/or process data stored in the memory 220, and store resulting data obtained therefrom in the memory 220. The processor 210 may be a data processing device implemented by hardware including a circuit having a physical structure to perform desired operations. For example, the desired operations may include code or instructions included in a program.
The hardware-implemented data processing device may include, for example, a main processor (e.g., a central processing unit (CPU), a field-programmable gate array (FPGA), or an application processor (AP)) or an auxiliary processor (e.g., a GPU, a neural processing unit (NPU), an image signal processor (ISP), a sensor hub processor, or a communication processor (CP)) that is operable independently of, or in conjunction with, the main processor. Further details regarding the processor 210 are provided below.
The processor 210 may read/write neural network data, for example, text data, voice data, image data, feature map data, kernel data, biases, weights, for example, connection weight data, hyperparameters, and other parameters, etc., from/to the memory 220 and implement a neural network using the read/written data. When the neural network is implemented, the processor 210 may repeatedly perform operations between an input and parameters, in order to generate data with respect to an output. Here, in an example convolution layer, the number of convolution operations may be determined depending on various factors, such as, for example, the number of channels of the input or input feature map, the number of channels of the kernel, the size of the input feature map, the size of the kernel, the number of kernels, and the precision of values. Such a neural network may be implemented as a complicated architecture, where the processor 210 performs convolution operations with an operation count of up to hundreds of millions to tens of billions, and the frequency at which the processor 210 accesses the memory 220 for the convolution operations rapidly increases.
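The per-layer operation count referenced above follows directly from those factors. A minimal worked example, with hypothetical layer dimensions:

```python
# Estimated multiply-accumulate (MAC) count for one standard convolution layer:
# each of the h_out x w_out x c_out output values needs c_in x k_h x k_w MACs.
c_in, c_out = 64, 128   # input/output channels (hypothetical values)
k_h, k_w = 3, 3         # kernel size
h_out, w_out = 56, 56   # output feature-map size

macs = h_out * w_out * c_out * c_in * k_h * k_w
print(f"{macs:,} MACs")  # 231,211,008, i.e., hundreds of millions per layer
```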
The processor 210 may learn a candidate architecture of a neural network (e.g., the basic neural network 120 in
The processor 210 may measure hardware resources needed (or used) to operate the neural network of the candidate architecture. A measured hardware resource may include, for example, one or more of power consumption, memory demand, the number of operations, and a processing time when the neural network of the candidate architecture operates, but is not limited thereto. The processor 210 may measure the hardware resource to determine a hardware resource measurement value.
The processor 210 may predict the hardware resource that would be needed when the neural network of the candidate architecture operates, using a hardware resource prediction module, such as, for example, the hardware resource prediction module 420 in
The processor 210 may determine a hardware resource loss based on the measured hardware resource and the predicted hardware resource. The processor 210 may determine the hardware resource loss based on a difference between the measured hardware resource and the predicted hardware resource as well as a loss function. For example, when a neural network of a candidate architecture operates, the processor 210 may determine a hardware resource loss by applying a difference between an actual processing time measurement value and an expected processing time value output from a hardware resource prediction model to the loss function.
The processor 210 may determine a target architecture of the neural network based on the neural network loss and the hardware resource loss. The processor 210 may determine the target architecture and target parameters for reducing the neural network loss and the hardware resource loss. The processor 210 may update the parameters of the candidate architecture such that the hardware resource loss is minimized, e.g., minimized in a direction towards smaller or minimal loss, or otherwise based on a minimal threshold, as non-limiting examples. The processor 210 may determine an optimization loss including the neural network loss and the hardware resource loss and may select a target operation that minimizes the optimization loss among candidate operations of each layer included in the neural network of the candidate architecture. The processor 210 may select the target operation of each layer that minimizes the neural network loss and the hardware resource loss of the entire neural network and may update the parameters of the neural network. The processor 210 may determine, as the optimization loss, a weighted sum of the neural network loss, based on the parameters of the candidate architecture, and the hardware resource loss, based on the parameters of the candidate architecture, and may determine the target architecture that minimizes the optimization loss.
The operations performed by the computing apparatus 200 described above may be variously applied to neural network-based algorithms that may operate in embedded systems as well as mobile devices (e.g., wearable devices, smartphones, etc.).
Referring to
A computing apparatus (e.g., the computing apparatus 200 in
Referring to
Parameters of the candidate architecture “a” 412 may include a parameter for a selected candidate operation among candidate operations of each layer of the neural network and a parameter indicating an operation characteristic for each selected candidate operation. For example, assuming that the candidate operations of a layer included in the neural network include a 3×3 kernel-based convolution operation and a 5×5 kernel-based convolution operation, the parameters of the candidate architecture “a” 412 may include a parameter indicating which convolution operation is selected among the convolution operations and a kernel parameter of the selected convolution operation. Here, a convolution operation may be implemented by a convolution layer.
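As a sketch of how such parameters might be realized in practice, the following assumes a DARTS-style continuous relaxation, in which a softmax over an architecture parameter weights the candidate operations of a layer; this is one possible reading, not the disclosure's prescribed implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One layer with two candidate operations (a 3x3 and a 5x5 convolution)
    and an architecture parameter alpha selecting between them."""

    def __init__(self, channels: int):
        super().__init__()
        self.candidates = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 candidate
            nn.Conv2d(channels, channels, 5, padding=2),  # 5x5 candidate
        ])
        # Architecture parameter: which candidate operation is selected.
        self.alpha = nn.Parameter(torch.zeros(len(self.candidates)))

    def forward(self, x):
        # Softmax relaxation: a differentiable "selection" over candidates.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.candidates))

layer = MixedOp(channels=16)
out = layer(torch.randn(1, 16, 32, 32))
```

Under this reading, layer.alpha corresponds to the parameter indicating which convolution operation is selected, and the kernel weights of each nn.Conv2d correspond to the kernel parameters of the selected convolution operation.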
A computing apparatus (e.g., the computing apparatus 200 in
In operation 432, the computing apparatus may perform operations using the neural network having the candidate architecture “a” 412 and, in operation 434, the computing apparatus may actually measure a hardware resource of the whole neural network needed in the process of performing the corresponding operations. The hardware resource to be measured may include, for example, power consumption, memory demand, the number of operations, a processing time, and the like. When the hardware resource is measured in operation 434, a hardware resource measurement value LATM(a) 436 for the candidate architecture “a” 412 may be determined. In an example, the hardware resource may be measured by performing a neural network operation on an architecture determined through a max operation. Since the process 430 including these operations is not differentiable, a forward operation and a backward operation for the process 430 may not be defined. To resolve this issue, the hardware resource prediction module 420 may be used.
The computing apparatus may use the hardware resource prediction module 420 to predict the hardware resource of the whole neural network that would be needed when the neural network having the candidate architecture “a” 412 performs an operation. A hardware resource prediction value LATP(a) 422 of the candidate architecture “a” 412 may be determined through the hardware resource prediction module 420. The hardware resource prediction module 420 may receive parameters of the candidate architecture “a” 412 as an input, predict the hardware resource of the neural network of the candidate architecture “a” 412 based on the input parameters, and output the hardware resource prediction value LATP(a) 422. The operation in the hardware resource prediction module 420 may be performed as a differentiable operation.
The hardware resource prediction module 420 may be a model or a neural network trained through a training process such that a hardware resource prediction value, predicted to be needed or used by the architecture of the neural network, is output based on the parameters of the architecture of the neural network. However, in addition to the model or the neural network, the hardware resource prediction module 420 may be implemented by other approaches that may predict the hardware resource of the neural network based on the candidate architecture “a” 412.
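A minimal sketch of one such approach, assuming the prediction module is realized as a small multilayer perceptron over flattened architecture parameters (the disclosure leaves the internal structure open):

```python
import torch
import torch.nn as nn

class HardwareResourcePredictor(nn.Module):
    """Differentiable module mapping architecture parameters to a predicted
    hardware resource value LATP(a), e.g., an expected processing time."""

    def __init__(self, num_arch_params: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_arch_params, 64),
            nn.ReLU(),
            nn.Linear(64, 1),  # scalar resource prediction
        )

    def forward(self, arch_params: torch.Tensor) -> torch.Tensor:
        return self.net(arch_params)

predictor = HardwareResourcePredictor(num_arch_params=8)
a = torch.randn(8, requires_grad=True)  # candidate-architecture parameters
lat_p = predictor(a)                    # differentiable with respect to a
lat_p.backward()                        # gradients flow back to the parameters
```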
The computing apparatus may determine a hardware resource loss LHW(a) 442 of the candidate architecture “a” 412 based on the hardware resource measurement value LATM(a) 436 and the hardware resource prediction value LATP(a) 422. The hardware resource loss LHW(a) 442 may be defined such that a difference between a hardware resource measurement value LATM(a) 436 and a hardware resource prediction value LATP(a) 422 is minimized.
In an example, the hardware resource loss LHW(a) 442 may be determined based on LHW1(a), a loss indicating the difference between the hardware resource measurement value LATM(a) 436 and the hardware resource prediction value LATP(a) 422, and LHW2(a), a factor for optimizing the hardware resource (e.g., a factor for minimizing latency). LHW1(a) and LHW2(a) may be determined by Equations 1 and 2 below, respectively.
LHW1(a) = (LATM(a) − LATP(a))^2 [Equation 1]

LHW2(a) = (LATM(a))^2 [Equation 2]
The hardware resource loss LHW(a) 442 may be determined as a weighted sum between LHW1(a) and LHW2(a), for example, as represented by Equation 3 below.
LHW(a) = LHW1(a) + w × LHW2(a) [Equation 3]
In Equation 3, w denotes a weight applied to LHW2(a) and may be, for example, a preset constant. In an example, a weight may be applied only to LHW1(a), or different weights may be applied to LHW1(a) and LHW2(a), respectively.
The computing apparatus may determine the hardware resource loss LHW(a) 442 by applying a difference between the hardware resource measurement value LATM(a) 436 and the hardware resource prediction value LATP(a) 422 to a loss function. In an example, the loss functions LHW described in Equations 1 to 3 may be defined in advance.
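Written as code, Equations 1 to 3 might look as follows; lat_m would come from the measurement process 430 and lat_p from the hardware resource prediction module 420, and the default weight value here is a hypothetical choice, not a value from this disclosure.

```python
import torch

def hardware_resource_loss(lat_m: torch.Tensor,
                           lat_p: torch.Tensor,
                           w: float = 0.1) -> torch.Tensor:
    """Hardware resource loss LHW(a) following Equations 1 to 3.

    lat_m: measured hardware resource value LATM(a) (a constant measurement).
    lat_p: predicted hardware resource value LATP(a) (differentiable).
    w:     preset constant weighting the resource-optimizing term.
    """
    l_hw1 = (lat_m - lat_p) ** 2  # Equation 1: match prediction to measurement
    l_hw2 = lat_m ** 2            # Equation 2: factor for optimizing the resource
    return l_hw1 + w * l_hw2      # Equation 3: weighted sum
```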
The computing apparatus may determine an optimization loss including the neural network loss LTASK(w(a), a) 418 and the hardware resource loss LHW(a) 442 and may determine a target architecture by selecting a target operation that minimizes the optimization loss among candidate operations of each layer included in the neural network of the candidate architecture “a” 412. For example, the target architecture may be determined by searching for parameters w(a) 416 of the candidate architecture “a” 412 and the candidate architecture “a” that minimizes the optimization loss, as represented by Equation 4 below. In the example of Equation 4, an optimization loss may be determined by a weighted sum of the neural network loss LTASK(w(a), a) 418 and a result of weights being applied to the hardware resource loss LHW(a) 442.
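The text of Equation 4 is not reproduced above; as a sketch of the weighted sum it is described as representing, with a hypothetical weight name lambda_hw:

```python
def optimization_loss(l_task, l_hw, lambda_hw: float = 1.0):
    """Optimization loss combining the neural network loss LTASK(w(a), a) and
    a weighted hardware resource loss LHW(a); the target architecture and its
    parameters are those that minimize this quantity."""
    return l_task + lambda_hw * l_hw
```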
The computing apparatus may effectively search for the target architecture of the neural network in a short period of time and may consider the hardware resource needed by the neural network as an optimization constraint when the target architecture is selected.
Referring to
In operation 520, the computing apparatus may determine a neural network loss based on parameters of the candidate architecture of the neural network. The computing apparatus may learn the candidate architecture of the neural network using training data and may determine the neural network loss. The computing apparatus may determine the neural network loss based on a difference between validation data and result data processed by the neural network. When the difference between the result data output from the neural network of the candidate architecture and the targeted validation data increases, the neural network loss may increase.
In operation 530, the computing apparatus may measure the physical hardware resource used for operating the neural network of the candidate architecture. The measured hardware resource may include, for example, one or more of power consumption, memory demand, the number of operations, and a processing time when the neural network of the candidate architecture operates, but is not limited thereto.
In operation 540, the computing apparatus may predict the hardware resource that would be needed or used when the neural network of the candidate architecture operates, using a hardware resource prediction module (e.g., the hardware resource prediction module 420 in
In operation 550, the computing apparatus may determine the hardware resource loss based on the measured hardware resource and the predicted hardware resource. The computing apparatus may determine the hardware resource loss based on a difference between the measured hardware resource and the predicted hardware resource, as well as a loss function. In an example, the loss function may be predefined.
In operation 560, the computing apparatus may determine a target architecture and target parameters of the neural network, based on the neural network loss and the hardware resource loss. The computing apparatus may determine the target architecture and the target parameters that minimize an optimization loss including the neural network loss and the hardware resource loss. The computing apparatus may determine the target architecture by selecting a target operation that minimizes the optimization loss among the candidate operations of each layer included in the neural network of the candidate architecture. The computing apparatus may determine, as the optimization loss, the weighted sum of the neural network loss, based on the parameters of the candidate architecture, and the hardware resource loss, based on the parameters of the candidate architecture, and may determine the target architecture that minimizes the optimization loss.
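Putting operations 520 to 560 together, one iteration of the search might be organized as in the sketch below. This is a schematic illustration under stated assumptions, not the literal procedure of this disclosure: the helper names, the use of latency as the measured resource, and the argument layout are all hypothetical.

```python
import time
import torch
import torch.nn.functional as F

def measure_latency(model, x):
    """Operation 530: measure an actual hardware resource; here, wall-clock
    processing time of one forward pass, returned as a constant tensor."""
    with torch.no_grad():
        start = time.perf_counter()
        model(x)
    return torch.tensor(time.perf_counter() - start)

def search_step(candidate, arch_params, predictor, optimizer,
                valid_batch, w_hw=0.1, lambda_hw=1.0):
    """One iteration over operations 520 to 560; `optimizer` is assumed to
    hold the candidate's weights, the architecture parameters, and the
    predictor's parameters."""
    v_inputs, v_targets = valid_batch

    # Operation 520: neural network loss on validation data.
    l_task = F.cross_entropy(candidate(v_inputs), v_targets)

    # Operation 530: measured hardware resource (constant w.r.t. parameters).
    lat_m = measure_latency(candidate, v_inputs)

    # Operation 540: predicted hardware resource from architecture parameters.
    lat_p = predictor(arch_params)

    # Operation 550: hardware resource loss, following Equations 1 to 3.
    l_hw = (lat_m - lat_p) ** 2 + w_hw * lat_m ** 2

    # Operation 560: optimization loss; the target architecture and target
    # parameters are those that minimize this quantity.
    loss = l_task + lambda_hw * l_hw
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```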
Referring to
The electronic device 600 may include a processor 610, a memory 620, a camera 630, a sensor 640, an output device 650, and a communication device 660. At least some of the components of the electronic device 600 may be coupled mutually and exchange signals (e.g., commands or data) therebetween via an inter-peripheral communication interface 670 (e.g., a bus, a general-purpose input and output (GPIO) interface, a serial peripheral interface (SPI), a mobile industry processor interface (MIPI)).
The processor 610 may be a processing device implemented by hardware including a circuit having a physical structure to perform operations. For example, the operations may be implemented by execution of computer-readable instructions that configure the processing device to perform any one, or any combination, of the operations described.
For example, the hardware-implemented data processing device may include a microprocessor, a central processing unit (CPU), a processor core, a multi-core processor, a multiprocessor, an application-specific integrated circuit (ASIC), and a field-programmable gate array (FPGA). Further details regarding the processor 610 are provided below.
The processor 610 may control overall operations of the electronic device 600 and execute functions and instructions to be executed by the electronic device 600. The processor 610 may perform operations of the computing apparatus described above with reference to
The memory 620 may store the instructions executable by the processor 610, input/output data, and various neural network parameters. The memory 620 may include a volatile memory and/or a non-volatile memory. The volatile memory device may be implemented as a dynamic random-access memory (DRAM), a static random-access memory (SRAM), a thyristor RAM (T-RAM), a zero capacitor RAM (Z-RAM), or a twin transistor RAM (TTRAM).
The non-volatile memory device may be implemented as an electrically erasable programmable read-only memory (EEPROM), a flash memory, a magnetic RAM (MRAM), a spin-transfer torque (STT)-MRAM, a conductive bridging RAM (CBRAM), a ferroelectric RAM (FeRAM), a phase change RAM (PRAM), a resistive RAM (RRAM), a nanotube RRAM, a polymer RAM (PoRAM), a nano floating gate memory (NFGM), a holographic memory, a molecular electronic memory device, or an insulator resistance change memory. Further details regarding the memory 620 are provided below.
The camera 630 may capture an image. The camera 630 may obtain, for example, a color image, a black and white image, a gray image, an infrared (IR) image, or a depth image. For example, an image captured by the camera 630 may be used as an input to a convolution layer of the CNN.
The sensor 640 may detect an operational state (e.g., power or temperature) of the electronic device 600 or an environmental state (e.g., a state of a user) external to the electronic device 600 and may generate an electric signal or data value corresponding to the detected state. The sensor 640 may include, for example, a gesture sensor, a gyro sensor, an atmospheric pressure sensor, a magnetic sensor, an acceleration sensor, a grip sensor, a proximity sensor, a color sensor, an IR sensor, a biometric sensor, a temperature sensor, a humidity sensor, or an illuminance sensor. The sensor 640 may include sensors used to measure various resources of the electronic device 600.
The output device 650 may provide an output of the electronic device 600 to a user through a visual, auditory, or tactile channel. The output device 650 may include, for example, a display device, such as a liquid crystal display, a light-emitting diode (LED)/organic LED display, or a micro-LED, a touch screen, a speaker, a vibration generating device, or any other device that may provide the output to the user. In an example, the output device 650 may also be configured to receive an input from the user, such as a voice input, a gesture input, or a touch input.
The communication device 660 may be configured to establish a direct (or wired) communication channel or a wireless communication channel between the electronic device 600 and an external electronic device and may support communication through the established communication channel. In an example, the communication device 660 may include a wireless communication module (e.g., a cellular communication module, a short-range wireless communication module, or a global navigation satellite system (GNSS) communication module) or a wired communication module (e.g., a local area network (LAN) communication module, or a power line communication (PLC) module). The wireless communication device may communicate with the external device via a short-range communication network (e.g., Bluetooth™, wireless-fidelity (Wi-Fi) direct, or IR data association (IrDA)) or a long-range communication network (e.g., a legacy cellular network, a 5G network, a next-generation communication network, the Internet, or a computer network (e.g., a LAN or a wide area network (WAN))).
In an example, the electronic device 600 may be an AR provision apparatus (or device) (e.g., AR glasses) using a neural network-based algorithm. The AR provision apparatus may be worn on a face of a user to provide the user with content related to an AR service and/or a virtual reality (VR) service. The processor 610 may perform a processing operation using a trained neural network having the target architecture. The camera 630 may capture an image for generating AR content, and the processor 610 may generate the AR content by processing the image using the neural network having the target architecture. For example, the processor 610 may generate AR content by recognizing an object in the image obtained through the camera 630 and superimposing virtual content on a recognized object area or an area around the object.
The processor 610 may determine the target architecture of the neural network through the same process described with reference to
The computing apparatus 200, processor 210, processor 610, and other apparatuses, devices, units, modules, and components described herein are implemented by hardware components. Examples of hardware components that may be used to perform the operations described in this application where appropriate include controllers, sensors, generators, drivers, memories, comparators, arithmetic logic units, adders, subtractors, multipliers, dividers, integrators, and any other electronic components configured to perform the operations described in this application. In other examples, one or more of the hardware components that perform the operations described in this application are implemented by computing hardware, for example, by one or more processors or computers. A processor or computer may be implemented by one or more processing elements, such as an array of logic gates, a controller and an arithmetic logic unit, a digital signal processor, a microcomputer, a programmable logic controller, a field-programmable gate array, a programmable logic array, a microprocessor, or any other device or combination of devices that is configured to respond to and execute instructions in a defined manner to achieve a desired result. In one example, a processor or computer includes, or is connected to, one or more memories storing instructions or software that are executed by the processor or computer. Hardware components implemented by a processor or computer may execute instructions or software, such as an operating system (OS) and one or more software applications that run on the OS, to perform the operations described in this application. The hardware components may also access, manipulate, process, create, and store data in response to execution of the instructions or software. For simplicity, the singular term “processor” or “computer” may be used in the description of the examples described in this application, but in other examples multiple processors or computers may be used, or a processor or computer may include multiple processing elements, or multiple types of processing elements, or both. For example, a single hardware component or two or more hardware components may be implemented by a single processor, or two or more processors, or a processor and a controller. One or more hardware components may be implemented by one or more processors, or a processor and a controller, and one or more other hardware components may be implemented by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may implement a single hardware component, or two or more hardware components. A hardware component may have any one or more of different processing configurations, examples of which include a single processor, independent processors, parallel processors, single-instruction single-data (SISD) multiprocessing, single-instruction multiple-data (SIMD) multiprocessing, multiple-instruction single-data (MISD) multiprocessing, multiple-instruction multiple-data (MIMD) multiprocessing, a controller and an arithmetic logic unit (ALU), a DSP, a microcomputer, an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA), a programmable logic unit (PLU), a central processing unit (CPU), a graphics processing unit (GPU), a neural processing unit (NPU), or any other device capable of responding to and executing instructions in a defined manner.
The methods that perform the operations described in this application are performed by computing hardware, for example, by one or more processors or computers, implemented as described above, executing instructions or software to perform the operations described in this application that are performed by the methods. For example, a single operation or two or more operations may be performed by a single processor, or two or more processors, or a processor and a controller. One or more operations may be performed by one or more processors, or a processor and a controller, and one or more other operations may be performed by one or more other processors, or another processor and another controller. One or more processors, or a processor and a controller, may perform a single operation, or two or more operations.
The instructions or software to control a processor or computer to implement the hardware components and perform the methods as described above are written as computer programs, code segments, instructions or any combination thereof, for individually or collectively instructing or configuring the processor or computer to operate as a machine or special-purpose computer to perform the operations performed by the hardware components and the methods as described above. In one example, the instructions or software include machine code that is directly executed by the processor or computer, such as machine code produced by a compiler. In an example, the instructions or software include at least one of an applet, a dynamic link library (DLL), middleware, firmware, a device driver, or an application program storing the method for searching for an optimal architecture of a neural network. In another example, the instructions or software include higher-level code that is executed by the processor or computer using an interpreter. Programmers of ordinary skill in the art can readily write the instructions or software based on the block diagrams and the flow charts illustrated in the drawings and the corresponding descriptions in the specification, which disclose algorithms for performing the operations performed by the hardware components and the methods as described above.
The instructions or software to control a processor or computer to implement the hardware components and perform the methods as described above, and any associated data, data files, and data structures, are recorded, stored, or fixed in or on one or more non-transitory computer-readable storage media. Examples of a non-transitory computer-readable storage medium include read-only memory (ROM), random-access programmable read only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), magnetic RAM (MRAM), spin-transfer torque (STT)-MRAM, static random-access memory (SRAM), thyristor RAM (T-RAM), zero capacitor RAM (Z-RAM), twin transistor RAM (TTRAM), conductive bridging RAM (CBRAM), ferroelectric RAM (FeRAM), phase change RAM (PRAM), resistive RAM (RRAM), nanotube RRAM, polymer RAM (PoRAM), nano floating gate memory (NFGM), holographic memory, molecular electronic memory device, insulator resistance change memory, dynamic random access memory (DRAM), static random access memory (SRAM), flash memory, non-volatile memory, CD-ROMs, CD−Rs, CD+Rs, CD−RWs, CD+RWs, DVD-ROMs, DVD−Rs, DVD+Rs, DVD−RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, Blu-ray or optical disk storage, hard disk drive (HDD), solid state drive (SSD), flash memory, a card type memory such as multimedia card micro or a card (for example, secure digital (SD) or extreme digital (XD)), magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state disks, and any other device that is configured to store the instructions or software and any associated data, data files, and data structures in a non-transitory manner and provide the instructions or software and any associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the instructions. In an example, the instructions or software and any associated data, data files, and data structures are distributed over network-coupled computer systems so that the instructions and software and any associated data, data files, and data structures are stored, accessed, and executed in a distributed fashion by the one or more processors or computers.
While this disclosure includes specific examples, it will be apparent after an understanding of the disclosure of this application that various changes in form and details may be made in these examples without departing from the spirit and scope of the claims and their equivalents. The examples described herein are to be considered in a descriptive sense only, and not for purposes of limitation. Descriptions of features or aspects in each example are to be considered as being applicable to similar features or aspects in other examples. Suitable results may be achieved if the described techniques are performed in a different order, and/or if components in a described system, architecture, device, or circuit are combined in a different manner, and/or replaced or supplemented by other components or their equivalents. Therefore, the scope of the disclosure is defined not by the detailed description, but by the claims and their equivalents, and all variations within the scope of the claims and their equivalents are to be construed as being included in the disclosure.
Number | Date | Country | Kind
---|---|---|---
10-2021-0171979 | Dec. 3, 2021 | KR | national