The present disclosure relates to the technical field of information technology, and particularly to an information processing method and related products.
With the continuous development of information technology and people's ever-increasing demand for information, the need for timeliness of information has become stronger. At present, a terminal obtains and processes information based on a general-purpose processor, such as by running a specified application on a general-purpose processor to realize language translation, reply, and the like. However, in practical applications, this way of obtaining information by running a software program on a general-purpose processor may be limited by the operating speed of the general-purpose processor; in particular, when the general-purpose processor carries a large load, the efficiency of obtaining information may be low and the delay may be long.
Examples of the present disclosure provide an information computation method and related products, which can increase processing speed and efficiency of a computation device.
In a first aspect, an example of the present disclosure provides an information processing method which is applied to a computation device, where the computation device includes a communication unit and an operation unit. The method includes:
In some possible examples, the computation device further includes a register unit and a controller unit, and the controlling, by the computation device, the operation unit to obtain and call an operation instruction to perform voice identification processing on the voice to be identified to obtain target text information corresponding to the voice to be identified includes:
In some possible examples, the first operation instruction is an instruction for forming a time-frequency conversion algorithm, where the time-frequency conversion algorithm includes at least one of the following: a Fast Fourier Transform algorithm, a rectangular window algorithm, a Hamming window algorithm, and a neural network algorithm.
In some possible examples, the calling the second operation instruction associated with the network model to perform voice identification processing on the intermediate voice to obtain target text information includes:
In some possible examples, the third operation instruction is an instruction associated with a sorting algorithm, where the sorting algorithm includes any one of the following: a Viterbi algorithm, a beam search algorithm, an A* algorithm, and a WFST algorithm.
In some possible examples, when the network model is a neural network model, the neural network model includes any one or more of the following functional layers: a convolution operation layer, a pooling layer, an activation softmax layer, a batch normalization layer, and a fully connected layer; where the functional layers are composed of at least one pre-stored operation instruction.
In some possible examples, the functional layers composing the neural network model and a count of the functional layers are customized by a user side or a terminal side.
In some possible examples, the neural network model includes any one of the following: a deep neural network (DNN) model, a convolutional neural network (CNN) model, a recurrent neural network (RNN) model, a back-propagation (BP) neural network model, a long short-term memory (LSTM) network model, and a gated recurrent unit (GRU) model.
In some possible examples, the non-neural network model includes any one of the following: a Markov model, an n-gram model, and a Bayesian model.
In some possible examples, the computation device further includes a data access unit and a storage medium,
In some possible examples, the operation unit includes a primary operation module and a plurality of secondary operation modules, where the primary operation module is interconnected with the plurality of secondary operation modules by an interconnection module, and when the operation instruction is a convolution operation instruction,
In some possible examples, the performing subsequent operations on the intermediate result includes:
controlling, by the computation device, the primary operation module to add bias data to the intermediate result, and then performing an activation operation.
In some possible examples, the primary operation module includes a first operation unit, where the first operation unit includes a vector addition unit and an activation unit,
In some possible examples, the primary operation module includes a first operation unit, a first data dependency determination unit, and a first storage unit. The above method further includes:
In some possible examples, each secondary operation module includes a second operation unit, where the second operation unit includes a vector multiplication unit and an accumulation unit,
In some possible examples, each secondary operation module includes a second operation unit, a second data dependency determination unit, a second storage unit, and a third storage unit. The above method further includes:
In some possible examples, the first data dependency or the second data dependency ensures that there is no consistency conflict in reading and writing in the following manners: storage addresses corresponding to data/instructions stored in the corresponding storage unit do not overlap; or determining whether there is dependency between a control signal that has not been executed and data of a control signal that is being executed; if there is no dependency, the control signal is allowed to be issued immediately; otherwise, the control signal is not allowed to be issued until all control signals on which the control signal is dependent have been executed, where
In some possible examples, the computation device controls the plurality of secondary operation modules to compute respective output scalars in parallel by using the same input data and respective convolution kernels.
In some possible examples, an activation function active used by the primary operation module may be any of the following non-linear functions: sigmoid, tanh, relu, softmax, or may be a linear function.
In some possible examples, the interconnection module forms a data channel for continuous or discrete data between the primary operation module and the plurality of secondary operation modules. The interconnection module has any of the following structures: a tree structure, a ring structure, a grid structure, a hierarchical interconnection, and a bus structure.
In a second aspect, an example of the present disclosure provides a computation device which includes a function unit configured to perform the methods of the first aspect.
In a third aspect, an example of the present disclosure provides a computer readable storage medium on which a computer program used for electronic data exchange is stored, where the computer program enables a computer to perform the methods of the first aspect.
In a fourth aspect, an example of the present disclosure further provides a computer program product which includes a non-transitory computer readable storage medium storing a computer program. The computer program may cause a computer to perform the methods of the first aspect.
In a fifth aspect, an example of the present disclosure provides a chip which includes the computation device of the second aspect.
In a sixth aspect, an example of the present disclosure provides a chip package structure which includes the chip of the fifth aspect.
In a seventh aspect, an example of the present disclosure provides a board card which includes the chip package structure of the sixth aspect.
In an eighth aspect, an example of the present disclosure provides an electronic device which includes the board card of the seventh aspect.
In some examples, the electronic device includes a data processing device, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a mobile phone, a traffic recorder, a navigator, a sensor, a webcam, a server, a cloud-based server, a camera, a video camera, a projector, a watch, a headphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or medical equipment.
In some examples, the vehicle includes an airplane, a ship, and/or a car. The household electrical appliance includes a television, an air conditioner, a microwave oven, a refrigerator, a rice cooker, a humidifier, a washing machine, an electric lamp, a gas cooker, and a range hood. The medical equipment includes a nuclear magnetic resonance spectrometer, a B-ultrasonic scanner, and/or an electrocardiograph.
Technical effects of implementing the examples of the present disclosure are as follows:
It can be seen that through the examples of the present disclosure, the computation device may control the communication unit to obtain a voice to be identified input by a user, and then control the operation unit to obtain and call the operation instruction to perform voice identification processing on the voice to be identified, thereby obtaining target text information corresponding to the voice to be identified, where the operation instruction is a preset instruction for voice identification. In this way, voice identification may be realized intelligently, quickly, and accurately. Compared with the prior art in which a general-purpose processor is used for voice identification, the present disclosure has the technical effects of lower power consumption and higher speed.
In order to illustrate the technical solutions in the examples of the present disclosure more clearly, the drawings to be used in the description of the examples are briefly explained below. Obviously, the drawings in the description below are some examples of the present disclosure. Other drawings can be obtained according to the disclosed drawings without any creative effort by those skilled in the art.
Technical solutions in examples of the present disclosure will be described clearly and completely hereinafter with reference to the accompanied drawings in the examples of the present disclosure. Obviously, the examples to be described are merely some rather than all examples of the present disclosure. All other examples obtained by those of ordinary skill in the art based on the examples of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.
Terms such as “first”, “second”, “third”, and “fourth” in the specification, the claims, and the drawings are used for distinguishing different objects rather than describing a specific order. In addition, terms such as “include”, “have”, and any variant thereof are used for indicating non-exclusive inclusion. For instance, a process, a method, a system, a product, or an equipment including a series of steps or units is not limited to the listed steps or units, but optionally includes steps or units that are not listed, or optionally includes other steps or units inherent to the process, the method, the product, or the equipment.
Reference to “example” means that a particular feature, a structure, or a characteristic described in conjunction with the example may be included in at least one example of the present disclosure. The term used in various places in the specification does not necessarily refer to the same example, nor does it refer to an example that is mutually exclusive, independent, or alternative to other examples. It can be explicitly and implicitly understood by those skilled in the art that the examples described herein may be combined with other examples.
First, a computation device used in the present disclosure is introduced.
The operation unit 614 includes at least two of the following: an addition arithmetic unit, a multiplication arithmetic unit, a comparator, and an activation arithmetic unit.
The interconnection module 613 is configured to control a connection relationship of the arithmetic units in the operation unit 614 so that the at least two arithmetic units can form different computation topologies.
The instruction storage unit (which may be a register unit, an instruction cache, or a scratchpad memory) 612 is configured to store the operation instruction, an address of a data block in the storage medium, and a computation topology corresponding to the operation instruction.
The operation instruction may include an operation field and an opcode. Taking a convolution operation instruction as an example, as shown in Table 1, register 0, register 1, register 2, register 3, and register 4 may be operation fields. Each of register 0, register 1, register 2, register 3, and register 4 may be one or a plurality of registers.
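Purely as an illustration of the opcode/operation-field structure described above (the field layout, names, and Python representation below are assumptions, not the encoding defined by the present disclosure), an operation instruction might be modeled as follows:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class OperationInstruction:
    """Illustrative model of an operation instruction: an opcode plus several
    operation fields, each of which names one or more registers."""
    opcode: str                        # e.g., "CONV" for a convolution operation instruction
    operation_fields: List[List[int]]  # register indices held by each operation field

# A hypothetical convolution instruction whose five operation fields correspond
# to register 0 through register 4 in Table 1 (each field may hold several registers).
conv_inst = OperationInstruction(opcode="CONV",
                                 operation_fields=[[0], [1], [2], [3], [4]])
print(conv_inst)
```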
The storage medium 611 may be an off-chip memory, and in certain applications, may also be an on-chip memory for storing a data block. The data block may be n-dimensional data, where n is an integer greater than or equal to 1. For instance, when n=1, the data is one-dimensional data, which is a vector; when n=2, the data is two-dimensional data, which is a matrix; and when n is equal to or greater than 3, the data is multi-dimensional data.
The control unit 615 is configured to fetch an operation instruction, an operation field corresponding to the operation instruction, and a first computation topology corresponding to the operation instruction from the register unit 612, and decode the operation instruction into an execution instruction. The execution instruction is configured to control the operation unit to perform an operation, transfer the operation field to the data access unit 616, and transfer the computation topology to the interconnection module 613.
The data access unit 616 is configured to fetch a data block corresponding to the operation field from the storage medium 611 and transfer the data block to the interconnection module 613.
The interconnection module 613 is configured to receive the first computation topology and the data block. In an example, the interconnection module 613 is further configured to rearrange the data block according to the first computation topology.
The operation unit 614 is configured to call an arithmetic unit of the operation unit 614 according to the execution instruction to perform an operation on the data block to obtain an operation result, transfer the operation result to the data access unit, and store the result in the storage medium. In an example, the operation unit 614 is configured to call an arithmetic unit according to the first computation topology and the execution instruction to perform an operation on the rearranged data block to obtain an operation result, transfer the operation result to the data access unit, and store the result in the memory.
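The fetch, decode, data-access, rearrange, and operate flow described above may be pictured with the following minimal sketch; all class names, method names, and the dictionary-as-memory representation are invented for illustration and are not the actual units of the computation device:

```python
# Minimal, self-contained sketch of the fetch/decode/execute flow described above.

class Controller:
    def __init__(self, register_unit):
        self.register_unit = register_unit            # holds (instruction, operation field, topology)

    def fetch_and_decode(self, pc):
        inst, op_field, topology = self.register_unit[pc]
        exec_inst = ("EXEC", inst)                    # stand-in for decoding into an execution instruction
        return exec_inst, op_field, topology


def run(controller, storage_medium, rearrange, operate, pc=0):
    exec_inst, op_field, topology = controller.fetch_and_decode(pc)
    data_block = storage_medium[op_field]             # data access unit fetches the data block
    arranged = rearrange(data_block, topology)        # interconnection module rearranges the data
    result = operate(exec_inst, arranged)             # operation unit runs the called arithmetic units
    storage_medium["result"] = result                 # result written back via the data access unit
    return result


# Toy usage: a single "CONV" instruction whose data block is a short vector.
registers = {0: ("CONV", "x", "tree")}
memory = {"x": [1.0, 2.0, 3.0]}
controller = Controller(registers)
print(run(controller, memory,
          rearrange=lambda block, topo: block,
          operate=lambda inst, data: sum(data)))
```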
In another example, the interconnection module 613 is configured to form the first computation topology by controlling the connection relationships of the arithmetic units in the operation unit 614.
An interconnection module is set in the computation device provided by the present disclosure. According to the needs of a computation instruction, the interconnection module can connect the arithmetic units in the operation unit to obtain a computation topology corresponding to the computation instruction, so that there is no need to store or fetch intermediate computation data in subsequent operations of the operation unit. Through this structure, a single instruction can take a single input through the operations of a plurality of arithmetic units to obtain a computation result, which improves computation efficiency.
A computation method of the computation device shown in
A method of performing a convolution operation instruction by the computation device shown in
In addition, the order of addition and multiplication can be reversed.
The technical solution provided by the present disclosure can realize convolution operations according to one instruction, namely a convolution operation instruction, without storing or obtaining intermediate data of the convolution operations (such as a first result, a second result, and a third result). The technical solution may reduce the storing and obtaining operations of intermediate data, and may have the technical effects of reducing corresponding operation steps and improving the effect of convolution operations.
It should be understood that the instruction set used in the present disclosure may include one or a plurality of operation instructions. The operation instruction includes, but is not limited to a COMPUTE instruction (an operation instruction), a CONFIG instruction, an IO instruction, an NOP instruction, a JUMP instruction, a MOVE instruction, etc. The COMPUTE instruction includes, but is not limited to, a convolution CONV instruction, a pooling operation instruction, etc. Specifically, an executable computation instruction in the present disclosure includes:
A convolution operation instruction (pure convolution operation instruction): according to the instruction, the device fetches input data and a convolution kernel of a specified size from a specified address in the memory (optionally a scratchpad memory) respectively, and performs a convolution operation in a convolution operation component. The above-mentioned specified size may be set by the user or manufacturer. For instance, in a computation device of a first manufacturer, the specified size may be set to data of A bit, and in a computation device of a second manufacturer, the specified size may be set to data of B bit. The data of A bit and the data of B bit have different sizes.
A pooling instruction. In an example, the pooling COMPUTE instruction (the pooling operation instruction, also referred to as the pooling instruction in the present disclosure) specifically includes:
A batch normalization instruction can be used for a batch normalization computation.
A fully connected instruction may include a fully connected layer forward operation instruction.
A fully connected layer forward operation instruction: according to the instruction, a device fetches weight data and bias data from a specified address in a memory, performs a full connection operation in a computation unit, and writes a result back to a specified address in a scratchpad memory.
The CONFIG instruction configures various constants required by a computation of a current artificial neural network layer before the computation starts. For instance, 1/kernel_area can be obtained by configuration using the CONFIG instruction. In the batch normalization computation, the CONFIG instruction configures various constants required for a current layer before a batch normalization computation begins.
The IO instruction is for reading-in input data required for a computation from an external storage space, and storing data to the external space after the computation finishes.
The NOP instruction is for emptying control instructions in all control instruction cache queues in the current device, and ensuring that all instructions before the NOP instruction are finished. The NOP instruction itself does not include any operations.
The JUMP instruction is for controlling jumping of a next instruction address to be read from an instruction storage unit, so that the jumping of a control flow can be realized.
The MOVE instruction is for moving data of an address in an internal address space of the device to another address in the internal address space of the device. This process is independent of an operation unit and does not occupy resources of the operation unit during execution.
Optionally, operation instructions that can be executed by the computation device may further include:
The computation device can also execute a vector logic instruction, including:
The computation device can also execute a vector comparison operation instruction, including:
The Random-Vector generation instruction may be:
During execution of a convolutional neural network algorithm (a convolution operation instruction) by the computation device shown in
Each computation process includes: selecting corresponding input data xi in the input data layer according to a convolution window, and then performing a multiply-accumulate operation on the input data and the convolution kernel. The computation of the output data is s = s(Σwxi + b), which is to multiply the convolution kernel w by the input data xi, sum the products, add a bias b, and then perform an activation operation s(h) to obtain the final output data s. The multiplication of the convolution kernel and the input data is vector multiplication.
According to the size kx of the convolution kernel on an X axis and the size ky of the convolution kernel on the Y axis, the convolution window firstly selects input data of which the size is the same as that of the convolution kernel from the input data of which the size of the X axis is W and the size of the Y axis is H, performs horizontal translation and then vertical translation according to translation vectors Sx and Sy of the convolution window, and traverses all the input data.
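As a minimal sketch of the computation described above, which slides the convolution window over an H×W input by the translation vectors Sx and Sy, multiplies the window by the kernel, accumulates, adds the bias, and applies an activation, the plain-Python routine below may be considered; the shapes, names, and the choice of sigmoid as the activation are assumptions for illustration only:

```python
import math

def conv2d_forward(x, w, b, sx, sy, act=lambda h: 1.0 / (1.0 + math.exp(-h))):
    """Sketch of s = act(sum(w * x_window) + b) for a single convolution kernel.

    x: H x W input (list of lists), w: ky x kx kernel, b: scalar bias,
    sx, sy: horizontal / vertical translation vectors of the convolution window.
    """
    H, W = len(x), len(x[0])
    ky, kx = len(w), len(w[0])
    out = []
    for top in range(0, H - ky + 1, sy):          # vertical translation of the window
        row = []
        for left in range(0, W - kx + 1, sx):     # horizontal translation of the window
            acc = 0.0
            for i in range(ky):                   # vector multiplication of kernel and window,
                for j in range(kx):               # then accumulation
                    acc += w[i][j] * x[top + i][left + j]
            row.append(act(acc + b))              # add bias, then apply the activation
        out.append(row)
    return out

# Toy usage: 4x4 input, 2x2 kernel, translation vectors of 1 in both directions.
x = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
print(conv2d_forward(x, [[1, 0], [0, 1]], b=0.0, sx=1, sy=1))
```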
The instruction set includes: convolutional neural network COMPUTE instructions with different functions, a CONFIG instruction, an IO instruction, an NOP instruction, a JUMP instruction, and a MOVE instruction. The above operation instructions will not be further described herein. For details, please refer to related descriptions in the above examples.
Optionally, the instruction set may further include a convolution activation CONV_ACTIVATE instruction.
The convolution activation CONV_ACTIVATE instruction: according to the instruction, the device fetches input data and a convolution kernel of a specified size from a specified address in the scratchpad memory (optionally), performs a convolution operation in a convolution operation component, and then performs an activation function operation on an output result; the above-mentioned specified size may be set by the manufacturer or user.
In one example, the CONV_ACTIVATE instruction includes: a convolution operation instruction and an activation instruction. The activation instruction is configured to perform an activation function operation, and the convolution operation instruction is configured to perform a convolution operation. For details, please refer to related descriptions in the above examples.
The instruction storage unit 1 is configured to read an instruction through the data access unit 3 and store the instruction.
The controller unit 2 is configured to read an instruction from the instruction storage unit 1, decode the instruction into a control signal for controlling the behavior of other modules, and send the instruction to other modules such as the data access unit 3, the primary operation module 5, and the plurality of secondary operation modules 6.
The data access unit 3 can access an external address space, directly read and write data to each storage unit inside the device, and complete the loading and storage of the data.
The interconnection module 4 is configured to connect the primary operation module and the secondary operation modules, and can be implemented into different interconnection topologies (such as tree structure, ring structure, grid structure, hierarchical interconnection, bus structure, etc.).
The first operation unit 51 includes a vector addition unit 511 and an activation unit 512. The first operation unit 51 is configured to receive a control signal from the controller unit and complete various operational functions of the primary operation module 5. The vector addition unit 511 is configured to perform the bias-adding operation in the forward computation of the convolutional neural network by performing element-wise addition on the bias data and the intermediate results to obtain a bias result. The activation unit 512 performs an activation function operation on the bias result. The bias data may be read in from an external address space, or may be stored locally.
The first data dependency determination unit 52 is a port for the first operation unit 51 to read/write the first storage unit 53, so as to ensure consistency in reading data from and writing data to the first storage unit 53. At the same time, the first data dependency determination unit 52 is also configured to send data read from the first storage unit 53 to the secondary operation modules through the interconnection module 4. Output data of the secondary operation modules 6 is directly sent to the first operation unit 51 through the interconnection module 4. An instruction output by the controller unit 2 is sent to the first operation unit 51 and the first data dependency determination unit 52 to control their behavior.
The first storage unit 53 is configured to cache input data and output data used by the primary operation module 5 during a computation process.
The second operation unit 61 is configured to receive a control signal from the controller unit 2 and perform a convolution operation. The second operation unit includes a vector multiplication unit 611 and an accumulation unit 612, which are respectively responsible for a vector multiplication operation and an accumulation operation in a convolution operation.
The second data dependency determination unit 62 is responsible for reading and writing the second storage unit 63 during a computation process. Before performing read/write operations, the second data dependency determination unit 62 first ensures that there is no consistency conflict between the reading and writing of data used by instructions. For instance, all control signals sent to the data dependency unit 62 are stored in an instruction queue inside the data dependency unit 62. In this queue, if the range of data to be read by a reading instruction conflicts with the range of data to be written by a writing instruction located earlier in the queue, the reading instruction can only be executed after the writing instruction on which it depends has been executed.
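The consistency check described above, in which a read may not proceed while an earlier queued write covers an overlapping address range, can be illustrated as follows; the half-open address ranges and function names are assumptions for illustration:

```python
def has_conflict(read_range, pending_writes):
    """Return True if the half-open address range to be read overlaps any
    address range still to be written by an earlier instruction in the queue."""
    r_start, r_end = read_range
    for w_start, w_end in pending_writes:
        if r_start < w_end and w_start < r_end:   # the two ranges overlap
            return True
    return False

# The read of [100, 150) must wait: an earlier write to [120, 160) overlaps it.
print(has_conflict((100, 150), [(0, 50), (120, 160)]))   # True
print(has_conflict((100, 150), [(0, 50), (200, 260)]))   # False
```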
The second storage unit 63 is configured to cache input data and output scalar data of the secondary operation modules 6.
The third storage unit 64 is configured to cache convolution kernel data required by the secondary operation modules 6 in a computation process.
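The division of labor between the primary operation module and the secondary operation modules described above may be sketched as follows: each secondary module multiplies and accumulates the shared input window against its own convolution kernel to produce one output scalar, and the primary module then adds the bias and applies an activation. The names, the relu choice, and the flattened-window layout below are assumptions for illustration:

```python
def secondary_module(shared_input, kernel):
    """One secondary module: vector multiplication followed by accumulation,
    yielding a single output scalar for its own convolution kernel."""
    return sum(x * w for x, w in zip(shared_input, kernel))

def primary_module(partial_scalars, bias, act=lambda h: max(0.0, h)):
    """Primary module: element-wise bias addition, then activation
    (relu here, chosen only for illustration)."""
    return [act(s + b) for s, b in zip(partial_scalars, bias)]

# The same flattened input window is broadcast to every secondary module;
# each module holds a different kernel (for example, one output channel each).
window  = [1.0, 2.0, 3.0, 4.0]
kernels = [[0.1, 0.2, 0.3, 0.4], [1.0, 0.0, -1.0, 0.0]]
bias    = [0.5, 0.5]

scalars = [secondary_module(window, k) for k in kernels]  # computed in parallel in hardware
print(primary_module(scalars, bias))
```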
The implementation of a multi-layer convolutional neural network is similar to that of a single-layer convolutional neural network. After an upper layer of the convolutional neural network is executed, the operation instruction of the next layer uses the output data address of the upper layer stored in the primary operation module as the input data address of this layer. Furthermore, the convolution kernel address and the bias data address in the instruction are changed to the addresses corresponding to this layer.
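The address chaining between layers described above may be pictured with the following sketch; the dictionary-as-memory representation and the per-layer operation are purely illustrative:

```python
def run_network(memory, layers, input_addr):
    """Each layer reads from the previous layer's output address and writes its
    own output; the next layer's instruction then uses that output address as
    its input address."""
    addr = input_addr
    for layer in layers:
        data = memory[addr]
        out = [layer["scale"] * v + layer["bias"] for v in data]  # stand-in for the layer's operation
        memory[layer["out_addr"]] = out
        addr = layer["out_addr"]          # output address of this layer becomes input of the next
    return addr

memory = {"in": [1.0, 2.0, 3.0]}
layers = [{"scale": 2.0, "bias": 0.0, "out_addr": "l1_out"},
          {"scale": 1.0, "bias": -1.0, "out_addr": "l2_out"}]
final_addr = run_network(memory, layers, "in")
print(final_addr, memory[final_addr])
```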
The present disclosure uses a device and an instruction set for performing the convolutional neural network forward operation, which solves the problems of insufficient CPU and GPU computation performance and high front-end decoding overhead. The present disclosure effectively improves support for the forward operation of a multi-layer convolutional neural network.
By using a dedicated on-chip cache for the forward operation of a multi-layer convolutional neural network, input neurons and intermediate data may be fully reused, which may avoid repeated reading of these data from the memory, reduce the memory access bandwidth, and prevent the memory bandwidth from becoming a performance bottleneck of the forward operation of a multi-layer artificial neural network.
Based on the above examples,
The method may further include a step S104, using, by the computation device, the voice to be identified as an input of the operation unit to call an operation instruction to perform voice identification processing on the voice to be identified to obtain target text information corresponding to the voice to be identified, where
the operation instruction is a preset instruction for voice identification.
The operation instruction includes, but is not limited to, a convolution operation instruction, a pooling instruction, a fully connected instruction, a batch normalization instruction, an activation softmax instruction, a matrix multiplication instruction, a matrix addition instruction, a scalar multiplication instruction, a scalar addition instruction, a normalization instruction, a non-linear activation instruction, and the like. For details, please refer to related descriptions in the above examples. Optionally, the process of calling related operation instructions in the computation device (such as in the operation unit) to process the voice to be identified will not be further described herein. For details, please refer to the specific descriptions of calling related instructions in the above examples.
Some examples involved in the present disclosure are described below.
In the step S102, the computation device obtains a voice to be identified input by a user. In an optional example, the communication unit may be the storage medium (the off-chip memory) shown in
In an optional example, the computation device may be the computation device shown in
Some examples involved in the step S104 are described below.
In an implementation of the step S104, the computation device may first call the related first operation instruction to pre-process the voice to be identified to obtain an intermediate voice to be identified. Further, the computation device may call the second operation instruction associated with the network model to perform voice identification processing on the intermediate voice to obtain target text information. The implementations are described in detail below.
Firstly, some examples involved in the pre-processing are introduced below. The pre-processing may be time-frequency conversion, which is to convert a time-domain signal into a frequency-domain signal. Specifically, the computation device may call the related first operation instruction associated with a time-frequency conversion algorithm to perform time-frequency conversion processing on the voice to be identified, so as to obtain an intermediate voice which belongs to a frequency-domain signal. The time-frequency conversion algorithm includes, but is not limited to, one or more of the following algorithms: an FFT (Fast Fourier Transform) algorithm, a rectangular window algorithm, a Hamming window algorithm, and a neural network algorithm. The first operation instructions composing the time-frequency conversion algorithm include, but are not limited to, one or more of the following instructions: a matrix multiplication instruction, a matrix addition instruction, a scalar multiplication instruction, a scalar addition instruction, a convolution operation instruction, a fully connected instruction, a pooling instruction, or other functional operation instructions. For instance, the computation device in the present disclosure may call and execute the following first operation instructions to implement a time-frequency conversion algorithm such as the FFT algorithm, the rectangular window algorithm, or the Hamming window algorithm: a matrix multiplication instruction, a matrix addition instruction, a scalar multiplication instruction, a scalar addition instruction, and the like. Correspondingly, the computation device may call the following first operation instructions to implement the neural network algorithm: a convolution operation instruction, a fully connected instruction, a pooling instruction, a batch normalization instruction, etc.
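As a minimal sketch of such pre-processing (framing the time-domain signal, applying a Hamming window, and converting each frame to the frequency domain), the routine below may be considered; the frame length, hop size, and the naive DFT used in place of a hardware FFT instruction are assumptions for illustration only:

```python
import cmath
import math

def hamming(n):
    """Hamming window coefficients of length n."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

def dft(frame):
    """Naive discrete Fourier transform (stand-in for an FFT instruction)."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def time_to_frequency(signal, frame_len=8, hop=4):
    """Split the time-domain voice signal into frames, window each frame,
    and convert it to the frequency domain."""
    win = hamming(frame_len)
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return [dft([s * w for s, w in zip(f, win)]) for f in frames]

# Toy usage on a short synthetic signal.
signal = [math.sin(2 * math.pi * 0.2 * t) for t in range(16)]
spectra = time_to_frequency(signal)
print(len(spectra), [round(abs(c), 3) for c in spectra[0][:4]])
```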
Secondly, some examples involved in voice identification are introduced below. Specifically, the process of the computation device calling the second operation instruction associated with the network model to perform voice identification processing on the intermediate voice is essentially to determine a mapping relationship between pronunciations and words of the voice to obtain and output corresponding target text information.
It should be understood that the voice to be identified is an audio file formed by pronunciations of a plurality of words. In practical applications, the process of determining the mapping relationship between pronunciations and words of the voice may be implemented by any one or more of the following devices: an encoder, a decoder, a language model, an acoustic model, a voice model, a neural network model, a non-neural network model, or other network models, which is not limited herein.
In an optional example, the network model includes, but is not limited to, a neural network model and a non-neural network model. The neural network model includes, but is not limited to, a deep neural network (DNN) model, a convolutional neural network (CNN) model, a recurrent neural network (RNN) model, a back-propagation (BP) neural network model, a long short-term memory (LSTM) network model, a gated recurrent unit (GRU) model, or other neural network models, which is not limited in the present disclosure. Optionally, the neural network model may be composed of any one or more of the following functional layers: a convolution operation layer, a pooling layer, an activation softmax layer, a batch normalization layer, and a fully connected layer, where the operation of each functional layer is implemented by at least one pre-stored operation instruction. In addition, a corresponding operation instruction may be designed for each functional layer in the present disclosure, so as to implement the operation in that functional layer. For instance, a fully connected instruction is designed for a fully connected layer and is called to implement the operation of the fully connected layer; the operation of the convolution operation layer can be implemented by the convolution operation instruction, and so on.
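Purely as an illustration of the idea that each functional layer is realized by one or more pre-stored operation instructions, a network model could be described as an ordered list of functional layers, each mapped to the instruction(s) assumed to implement it; the mapping below is an assumption, not the instruction set of the present disclosure:

```python
# Illustrative mapping from functional layers to the pre-stored operation
# instructions that implement them; the exact correspondence is an assumption.
LAYER_TO_INSTRUCTIONS = {
    "convolution":         ["CONV"],
    "pooling":             ["POOL"],
    "batch_normalization": ["BATCH_NORM"],
    "fully_connected":     ["FC"],
    "activation_softmax":  ["ACTIVATE_SOFTMAX"],
}

def instructions_for(model_layers):
    """Expand a user-defined sequence of functional layers into the sequence
    of operation instructions the operation unit would be asked to call."""
    return [inst for layer in model_layers for inst in LAYER_TO_INSTRUCTIONS[layer]]

# A user-customized model: two conv+pool stages followed by FC and softmax.
model = ["convolution", "pooling", "convolution", "pooling",
         "fully_connected", "activation_softmax"]
print(instructions_for(model))
```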
The non-neural network model includes, but is not limited to, any of the following: a Markov model, a hidden Markov model, an n-gram model, a Bayesian model, and the like. The hidden Markov model, the n-gram model, and the Bayesian model all involve the following chain-rule decomposition: P(W) = P(W1)*P(W2|W1)*P(W3|W1,W2)*P(W4|W2,W3)*...*P(Wn|Wn−1,Wn−2), where W represents a word or a pronunciation of a word, and P(Wn|Wn−1,Wn−2) represents the probability or score of the nth word predicted according to the (n−1)th word and the (n−2)th word.
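The chain-rule product above can be computed directly, as sketched below; the probability table and the trigram context window are made-up values for illustration:

```python
def sentence_probability(words, cond_prob):
    """P(W) = P(W1) * P(W2|W1) * P(W3|W1,W2) * ... using at most the two
    preceding words as context (a trigram approximation, as in the formula above)."""
    p = 1.0
    for i, w in enumerate(words):
        context = tuple(words[max(0, i - 2):i])   # the previous one or two words
        p *= cond_prob.get((w, context), 1e-6)    # small floor for unseen events
    return p

# Hypothetical probabilities for a three-word candidate sentence.
cond_prob = {
    ("the", ()): 0.2,
    ("weather", ("the",)): 0.1,
    ("improves", ("the", "weather")): 0.05,
}
print(sentence_probability(["the", "weather", "improves"], cond_prob))
```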
In an optional example, the computation device may call the following second operation instructions to implement the above network models: a matrix multiplication instruction, a matrix addition instruction, a scalar multiplication instruction, a scalar addition instruction, a convolution operation instruction, a fully connected instruction, a pooling instruction, or other functional operation instructions. For instance, the computation device in the present disclosure may call and execute the following second operation instructions to implement a non-neural network model such as the n-gram model or the hidden Markov model: a matrix multiplication instruction, a matrix addition instruction, a scalar multiplication instruction, a scalar addition instruction, and the like. Correspondingly, the computation device may call the following second operation instructions to implement the neural network model: a convolution operation instruction, a fully connected instruction, a pooling instruction, a batch normalization instruction, etc.
In a specific implementation of voice identification, the computation device may input the voice to be identified (which may specifically be the pronunciation of each word composing the voice to be identified) into the network model, so as to call the corresponding second operation instruction to perform voice identification processing on the pronunciation of each word in the voice to be identified, and iteratively look up the probability or score of one or more candidate words corresponding to the pronunciation of each word in time series. After the above processes are completed, a search space for all candidate words may be generated in the time series. The search space includes a plurality of pieces of text information generated according to the time series.
For instance, in the voice identification process, suppose the voice to be identified input into the computation device is an utterance of three words. Correspondingly, the second operation instruction in the network model may be called to identify that the pronunciation of the first word corresponds to two candidate words with probabilities of 0.9 and 0.1; the pronunciation of the second word corresponds to four candidate words with probabilities of 0.7, 0.1, 0.1, and 0.1; and the pronunciation of the third word corresponds to three candidate words with probabilities of 0.8, 0.1, and 0.1. Correspondingly, the search space generated in the time series may include 2*4*3=24 pieces of text information.
Further, the computation device may also look up the target text information from the plurality of pieces of text information. Specifically, the computation device may call a related third operation instruction in the decoder to calculate a respective score for each piece of text information, and select the text information whose score satisfies a scoring condition as the target text information for output. The amount of target text information is not limited herein, and the scoring condition is customized by the user side or the terminal side, such as a score exceeding a preset threshold, a highest score, or a lowest score. The third operation instruction may be an operation instruction associated with a sorting algorithm. In other words, the computation device may call a related third operation instruction to implement a sorting algorithm such as a Viterbi algorithm, a beam search algorithm, an A* algorithm, a WFST algorithm, an n-gram algorithm, etc., which is not limited in the present disclosure.
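As a minimal sketch of scoring and selecting candidate text information (here with a beam search over per-pronunciation candidates, scoring each path by the product of its word probabilities, accumulated in log space for numerical stability), the routine below may be considered; the candidate words, probabilities, and beam width are assumptions:

```python
import math

def beam_search(candidates_per_step, beam_width=3):
    """candidates_per_step: for each pronunciation in time series, a list of
    (word, probability) pairs. Returns the highest-scoring text hypotheses."""
    beams = [([], 0.0)]                                   # (words so far, log score)
    for candidates in candidates_per_step:
        expanded = [(words + [w], score + math.log(p))
                    for words, score in beams
                    for w, p in candidates]
        expanded.sort(key=lambda b: b[1], reverse=True)   # keep only the best paths
        beams = expanded[:beam_width]
    return beams

# Hypothetical candidates for a three-word utterance (probabilities as in the
# example above: 2, 4, and 3 candidates respectively).
steps = [[("w1a", 0.9), ("w1b", 0.1)],
         [("w2a", 0.7), ("w2b", 0.1), ("w2c", 0.1), ("w2d", 0.1)],
         [("w3a", 0.8), ("w3b", 0.1), ("w3c", 0.1)]]
for words, score in beam_search(steps):
    print(" ".join(words), round(math.exp(score), 4))
```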
For the third operation instruction, please refer to related introductions in the examples described in
In an optional example, the computation device may display the target text information on a terminal display interface in real time or periodically for a user to view. Optionally, the target text information may be several pieces (such as 5 pieces) of text information with higher scores selected by the computation device from a plurality of text information for display.
It should be noted that, in practical applications, the specific implementation processes involved in the present disclosure, such as the pre-processing, the mapping relationship between pronunciations and words of the voice, and the determination of the target text information, can be set in any one or more of the following devices to obtain the target text information: an encoder, a decoder, an acoustic model, an attention model, or other network models, which is not limited herein.
In an optional example, a specific implementation of the step S104 is briefly described below in combination with the above examples.
In a specific implementation, the computation device fetches a corresponding operation instruction from the register unit (or the instruction storage unit) through the controller unit and the data access unit, where the operation instruction is configured to process the voice to be identified (which may specifically be voice identification processing). For the operation instruction, please refer to the related introduction in the above examples; for instance, the instruction may be the operation instruction associated with a network model. The count of the operation instructions is not limited herein.
Further, after the controller unit fetches the operation instruction, the controller unit sends the operation instruction to the operation unit, so that the operation unit performs voice identification processing on the voice to be identified according to the computation topology corresponding to the operation instruction, so as to obtain the corresponding target text information.
A specific implementation process of the step S104 is described in detail below with the operation instruction being a convolution operation instruction as an instance.
In a specific implementation, referring to the computation device shown in
In another specific implementation, referring to the computation device shown in
For the implementation of calling related operation instructions in the computation device to process the voice to be identified, please refer to related descriptions of the above
Optionally, the computation device further includes a storage medium 611 (optional), a register unit 612, an interconnection module 613, a controller unit 615, and a data access unit 616. For the above function units, please refer to related descriptions of the examples in
For instance, the communication unit may be a storage medium or be an (IO) unit of the computation device, which is not limited herein.
In an optional example, the computation device further includes a register unit and a controller unit, where
In an optional example, the first operation instruction is an instruction for forming a time-frequency conversion algorithm, where the time-frequency conversion algorithm includes at least one of the following: a Fast Fourier Transform algorithm, a rectangular window algorithm, a Hamming window algorithm, and a neural network algorithm.
In an optional example,
In an optional example, when the network model is a neural network model, the neural network model includes any one or more of the following functional layers: a convolution operation layer, a pooling layer, an activation softmax layer, a batch normalization layer, and a fully connected layer; where the functional layers are composed of at least one pre-stored operation instruction.
In an optional example, the computation device further includes a data access unit and a storage medium,
In an optional example, the operation unit includes a primary operation module and a plurality of secondary operation modules, where the primary operation module is interconnected with the plurality of secondary operation modules by an interconnection module, and when the operation instruction is a convolution operation instruction,
In an optional example,
In an optional example, the primary operation module includes a first operation unit, where the first operation unit includes a vector addition unit and an activation unit,
In an optional example, the primary operation module includes a first operation unit, a first data dependency determination unit, and a first storage unit; where
The first data dependency determination unit is configured to ensure that there is no consistency conflict in reading data from and writing data to the first storage unit, read an input neuron vector from the first storage unit, and send the vector to the secondary operation modules through the interconnection module; and
In an optional example, each secondary operation module includes a second operation unit, where the second operation unit includes a vector multiplication unit and an accumulation unit,
In an optional example, each secondary operation module includes a second operation unit, a second data dependency determination unit, a second storage unit, and a third storage unit;
In an optional example, the first data dependency or the second data dependency ensures that there is no consistency conflict in reading and writing in the following manners: storage addresses corresponding to data/instructions stored in the corresponding storage unit do not overlap; or determining whether there is dependency between a control signal that has not been executed and data of a control signal that is being executed; if there is no dependency, the control signal is allowed to be issued immediately; otherwise, the control signal is not allowed to be issued until all control signals on which the control signal is dependent have been executed; where
In an optional example, the plurality of secondary operation modules are configured to compute respective output scalars in parallel by using the same input data and respective convolution kernels.
In an optional example, an activation function active used by the primary operation module may be any of the following non-linear functions: sigmoid, tanh, relu, softmax, or may be a linear function.
In an optional example, the interconnection module forms a data channel for continuous or discrete data between the primary operation module and the plurality of secondary operation modules. The interconnection module has any of the following structures: a tree structure, a ring structure, a grid structure, a hierarchical interconnection, and a bus structure.
For those parts which are not shown or described in the examples of the present disclosure, please refer to related descriptions of the above examples.
An example of the present disclosure further provides a computer storage medium on which a computer program is stored for electronic data exchange. The computer program may cause a computer to perform part or all of the steps of any information processing method described in the foregoing method examples.
An example of the present disclosure further provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program. The computer program may cause a computer to perform part or all of the steps of any information processing method described in the foregoing method examples.
An example of the present disclosure also provides an acceleration device which includes: a memory which stores executable instructions, and a processor configured to execute the executable instructions in the memory according to the information processing method.
The processor may be a single processing unit, or may include two or more processing units. In addition, the processor may include a general-purpose processor (CPU), a graphics processing unit (GPU), a field programmable gate array (FPGA), or an application-specific integrated circuit (ASIC) to set up and operate a neural network. The processor may also include an on-chip memory for caching (including a memory in the processing device).
In some examples, the present disclosure provides a chip which includes the above neural network processor configured to execute the information processing method.
In some examples, the present disclosure provides a chip package structure which includes the above chip.
In some examples, the present disclosure provides a board card which includes the above chip package structure.
In some examples, the present disclosure provides an electronic device which includes the above board card.
The electronic device may include a data processing device, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a mobile phone, a traffic recorder, a navigator, a sensor, a webcam, a server, a cloud-based server, a camera, a video camera, a projector, a watch, a headphone, a mobile storage, a wearable device, a vehicle, a household appliance, and/or medical equipment.
The vehicle may include an airplane, a ship, and/or a car. The household electrical appliance may include a television, an air conditioner, a microwave oven, a refrigerator, an electric rice cooker, a humidifier, a washing machine, an electric lamp, a gas cooker, and a range hood. The medical equipment may include a nuclear magnetic resonance spectrometer, a B-ultrasonic scanner, and/or an electrocardiograph.
It should be noted that, for the sake of conciseness, the foregoing method examples are all described as a series of action combinations. However, those skilled in the art should know that the present disclosure is not limited by the described order of actions, since according to the present disclosure, certain steps may be performed in a different order or simultaneously. Secondly, those skilled in the art should also understand that the examples described in the specification are all optional, and that the actions and modules involved are not necessarily required by the present disclosure.
In the examples above, the description of each example has its own emphasis. For a part that is not described in detail in one example, reference may be made to related descriptions in other examples.
It should be understood that in the examples provided by the present disclosure, the disclosed device may be implemented in other manners. For instance, the examples above are merely illustrative. For instance, the division of the units is only a division by logical function; in a real implementation, there may be other manners of division. For instance, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual coupling, direct coupling, or communication connection may be implemented through indirect coupling or communication connection of some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate components may or may not be physically separated. The components shown as units may or may not be physical units. In other words, the components may be located in one place, or may be distributed to a plurality of network units. According to certain needs, some or all of the units can be selected for realizing the purposes of the examples of the present disclosure.
In addition, the functional units in each example of the present disclosure may be integrated into one processing unit, or each of the units may exist separately and physically, or two or more units may be integrated into one unit. The integrated units above may be implemented in the form of hardware or in the form of software program modules.
When the integrated units are implemented in the form of a software program module and sold or used as an independent product, they may be stored in a computer-readable memory. Based on such understanding, the essence of the technical solutions of the present disclosure, or the part that contributes to the prior art, or all or part of the technical solutions, may be wholly or partly embodied in the form of a software product that is stored in a memory. The software product includes several instructions which enable a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the examples of the present disclosure. The foregoing memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk, an optical disc, or other media that can store program codes.
A person of ordinary skill in the art may understand that all or part of the steps of the foregoing method examples may be completed by a program instructing related hardware. The program may be stored in a computer-readable memory, and the memory may include a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or the like.
The examples of the present disclosure have been described in detail above. Specific examples have been used in the specification to explain the principles and implementation manners of the present disclosure. The descriptions of the above examples are only intended to facilitate understanding of the methods and core ideas of the present disclosure. Persons of ordinary skill in the art may change the specific implementation and application scope according to the ideas of the present disclosure. In summary, the content of this specification should not be construed as a limitation on the present disclosure.
This application is a continuation of U.S. patent application Ser. No. 16/760,235, filed on Apr. 29, 2020, which claims priority to U.S. National Phase Application No. PCT/CN2018/105463, filed on Sep. 13, 2018. The contents of the above-referenced applications are hereby expressly incorporated herein by reference in their entirety.
107038159 | Aug 2017 | CN |
107067825 | Aug 2017 | CN |
107111486 | Aug 2017 | CN |
107133018 | Sep 2017 | CN |
107169503 | Sep 2017 | CN |
107171932 | Sep 2017 | CN |
107194938 | Sep 2017 | CN |
107203775 | Sep 2017 | CN |
107221337 | Sep 2017 | CN |
107239824 | Oct 2017 | CN |
107240185 | Oct 2017 | CN |
107247930 | Oct 2017 | CN |
107301383 | Oct 2017 | CN |
107301453 | Oct 2017 | CN |
107301454 | Oct 2017 | CN |
107305484 | Oct 2017 | CN |
106447034 | Jul 2019 | CN |
106920545 | Jul 2020 | CN |
0097858 | Dec 1991 | EP |
0475732 | Dec 1998 | EP |
2851786 | Mar 2015 | EP |
2515145 | Dec 2015 | GB |
2006031475 | Feb 2006 | JP |
2005086443 | Sep 2005 | WO |
101187861 | May 2008 | WO |
2010064728 | Jun 2010 | WO |
2014105123 | Jul 2014 | WO |
2017021322 | Feb 2017 | WO |
2017027638 | Feb 2017 | WO |
2017048647 | Mar 2017 | WO |
2017077121 | May 2017 | WO |
2017084331 | May 2017 | WO |
2017124648 | Jul 2017 | WO |
Entry |
---|
Sainath, Tara N., et al. “Improvements to deep convolutional neural networks for LVCSR.” 2013 IEEE workshop on automatic speech recognition and understanding. IEEE, 2013. (Year: 2013). |
O. Abdel-Hamid, A.-r. Mohamed, H. Jiang, L. Deng, G. Penn and D. Yu, "Convolutional Neural Networks for Speech Recognition," in IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, No. 10, pp. 1533-1545, Oct. 2014, doi: 10.1109/TASLP.2014.2339736. (Year: 2014). |
Huang, Jui-Ting, Jinyu Li, and Yifan Gong. “An analysis of convolutional neural networks for speech recognition.” 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015. (Year: 2015). |
Qian, Yanmin, and Philip C. Woodland. “Very deep convolutional neural networks for robust speech recognition.” 2016 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2016. (Year: 2016). |
CN201711036374.9—Office Action mailed on Mar. 23, 2023, 8 pages (With Brief English Explanation). |
CN201880002336.8—Office Action mailed on Mar. 31, 2023, 8 pages (With Brief English Explanation). |
CN202010189354.0—Office Action mailed on Mar. 30, 2023, 8 pages (With Brief English Explanation). |
CN202010190143.9—Office Action mailed on Mar. 23, 2023, 10 pages (With Brief English Explanation). |
CN202010309559.8—First Office Action mailed on Mar. 8, 2023, 8 pages (With Brief English Translation). |
CN201711211933.5—Second Office Action mailed on Jun. 9, 2021, 12 pages. |
CN201711212125.0—Second Office Action mailed on Jul. 12, 2021, 37 pages. |
CN201711244020.3—Notice of Grant mailed on Jul. 15, 2022, 5 pages. |
CN201810799954.1—First Office Action mailed on Feb. 2, 2021, 40 pages. |
CN201810849488.3—Second Office Action mailed on Mar. 2, 2021, 11 pages. |
CN201810849496.8—Notice of Grant mailed on Jul. 8, 2021, 4 pages. |
CN201810849497.2—Notice of Grant mailed on Nov. 5, 2020, 4 pages. |
CN201810849498.7—Notice of Grant mailed on May 8, 2021, 4 pages. |
CN201810849499.1—First Office Action mailed on May 21, 2020, 11 pages. |
CN201810849509.1—First Office Action mailed on Mar. 30, 2020, 9 pages. |
CN201810849509.1—Second Office Action mailed on Oct. 9, 2020, 11 pages. |
CN201811436410.5—Notice of Grant mailed on Nov. 5, 2020, 4 pages. |
CN201811440571.1—Notice of Grant mailed on May 7, 2021, 4 pages. |
CN201910070501.X—Notice of Grant mailed on Feb. 9, 2021, 4 pages. |
CN202010190142.4—Chinese Office Action mailed on Dec. 20, 2022, 11 pages (With brief English explanation). |
CN202010336354.9—Chinese Office Action mailed on Dec. 30, 2022, 11 pages (With brief English explanation). |
Xufei Liu, “Say goodbye to Photoshop, Decryption of neural network based skin adjustment technic”, Computer Fan, Apr. 15, 2017, 2 pages. |
CN 201911062123.7—First Office Action, mailed Oct. 9, 2021, 16 pages. (with English translation). |
CN 201911058910.4—First Office Action, mailed Dec. 2, 2021, 17 pages. (with English translation). |
CN 201811440484.6—First Office Action, mailed Nov. 1, 2021, 20 pages. (with English translation). |
CN 201711212991.X—Third Office Action, mailed Apr. 2, 2021, 33 pages. (with English translation). |
CN 201810800665.9—Second Office Action, mailed Nov. 11, 2021, 18 pages. (with English translation). |
CN 201810799988.0—First Office Action, mailed Apr. 6, 2021, 22 pages. (with English translation). |
CN 201810801236.3—Second Office Action, mailed Feb. 3, 2021, 20 pages. (with English translation). |
CN 201810799954.1—First Office Action, mailed Feb. 2, 2021, 40 pages. (with English translation). |
CN 201810799954.1—Second Office Action, mailed Nov. 10, 2021, 20 pages. (with English translation). |
CN 201810800664.4—First Office Action, mailed Feb. 1, 2021, 67 pages. (with English translation). |
CN 201810800664.4—Second Office Action, mailed Nov. 24, 2021, 19 pages. (with English translation). |
CN 201810801238.2—First Office Action, mailed Mar. 18, 2021, 88 pages. (with English translation). |
CN 201810849509.1—Third Office Action, mailed Mar. 22, 2021, 19 pages. (with English translation). |
EP 18873474.3—Extended European Search Report, mailed Sep. 2, 2021, 7 pages. |
EP 18873474.3—Communication pursuant to Rules 70(2) and 70a(2) EPC, mailed Sep. 21, 2021, 1 page. |
CN 201810849484.5—Second Office Action, mailed Apr. 6, 2021, 13 pages. (with English translation). |
CN 201810801238.2—Second Office Action, mailed Sep. 14, 2021, 25 pages. (with English translation). |
CN 201911058839.X—First Office Action, mailed Oct. 26, 2021, 21 pages. (with English translation). |
CN 201711212991.X—Rejection Decision, mailed Nov. 26, 2021, 10 pages. (with brief English explanation). |
CN 201810800665.9—First Office Action, mailed Feb. 8, 2021, 33 pages. (with brief English explanation). |
Zhijian Lu, “The Research on Parallel Architecture for FPGA-Based Convolutional Neural Networks”, Apr. 1, 2014, 51 pages. (with English Abstract). |
Unknown Author, “The Latest Development of Speech Recognition Framework—Deep Full-Sequence Convolutional Neural Network Debut”, Aug. 5, 2016, 9 pages. (with English Abstract). |
CN 201911058910.4—Second Office Action, mailed Jul. 4, 2022, 6 pages. (With brief English Explanation). |
CN 201711244020.3—First Office Action mailed Jan. 7, 2022, 14 pages (with English translation). |
CN 201810800664.4—Office Action mailed Apr. 8, 2022, 8 pages (With brief English explanation). |
CN 201810800665.9—Office Action mailed Apr. 8, 2022, 8 pages (With brief English explanation). |
CN 201810801238.2—Office Action mailed Jan. 10, 2022, 10 pages (With brief English explanation). |
CN201711212125.0—Chinese Office Action issued Apr. 12, 2022, 11 pages (With brief English explanation). |
CN201810799954.1—Chinese Office Action issued Apr. 8, 2022, 8 pages (With brief English explanation). |
Development Tutorial for ARM Cortex-A9 Multi-cores embedded system, 2016, 5 pages. (With brief English explanation). |
Frank Vahid et al., "Embedded Systems Design: A Unified Hardware/Software Introduction", 2004, p. 42. |
Chou et al., “VEGAS: Soft Vector Processor with Scratchpad Memory”, FPGA '11: Proceedings of the 19th ACM/SIGDA international symposium on Field programmable gate arrays, Feb. 2011, 10 pages. |
“Learning BLAS library—ROT”, Cocoonyang, dated Mar. 17, 2017, 1 page. |
CN 201711212123.1—First Office Action mailed on Dec. 26, 2019, 37 pages. |
“Learning BLAS library—ROT”, Cocoonyang, dated Mar. 17, 2017, 2 pages (With Brief English Explanation). |
CN201711211933.5—First Office Action mailed on Dec. 16, 2020, 19 pages. |
CN201711212122.7—First Office Action mailed on Jul. 17, 2020, 30 pages. |
CN201711212123.1—Second Office Action mailed on May 21, 2020, 36 pages. |
CN201711212125.0—First Office Action mailed on Dec. 16, 2020, 36 pages. |
CN201711212656.X—First Office Action mailed on Nov. 27, 2019, 15 pages. |
CN201711212656.X—Second Office Action mailed on Jun. 28, 2020, 21 pages. |
CN201711212660.6—First Office Action mailed on Dec. 16, 2020, 31 pages. |
CN201711212991.X—First Office Action mailed on Aug. 26, 2020, 15 pages. |
CN201711212991.X—Third Office Action mailed on Apr. 2, 2021, 33 pages. |
CN201711212994.3—Second Office Action mailed on Jul. 13, 2021, 16 pages. |
CN201711212994.3—Chinese Office Action mailed on Nov. 20, 2020, 42 pages (With Brief English Explanation). |
CN201711212995.8—First Office Action mailed on Nov. 27, 2019, 15 pages. |
CN201711212995.8—Second Office Action mailed on Jun. 28, 2020, 22 pages. |
CN201810799987.6—First Office Action mailed on May 11, 2020, 17 pages. |
CN201810799987.6—Second Office Action mailed on Oct. 19, 2020, 11 pages. |
CN201810800001.2—First Office Action mailed on May 13, 2020, 25 pages. |
CN201810800001.2—Second Office Action mailed on Nov. 4, 2020, 28 pages. |
CN201810801236.3—First Office Action mailed on Apr. 23, 2020, 23 pages. |
CN201810801239.7—First Office Action mailed on Apr. 29, 2020, 24 pages. |
CN201810801239.7—Second Office Action mailed on Oct. 16, 2020, 13 pages. |
CN201810849479.4—First Office Action mailed on Apr. 26, 2020, 16 pages. |
CN201810849479.4—Second Office Action mailed on Nov. 4, 2020, 16 pages. |
CN201810849480.7—First Office Action mailed on May 22, 2020, 10 pages. |
CN201810849483.0—First Office Action mailed on Jul. 30, 2020, 16 pages. |
CN201810849484.5—First Office Action mailed on Jul. 3, 2020, 10 pages. |
CN201810849485.X—First Office Action mailed on Apr. 21, 2020, 15 pages. |
CN201810849485.X—Second Office Action mailed on Jan. 7, 2021, 11 pages. |
CN201810849486.4—First Office Action mailed on Apr. 26, 2020, 19 pages. |
CN201810849486.4—Second Office Action mailed on Jan. 5, 2021, 16 pages. |
CN201810849488.3—First Office Action mailed Jul. 23, 2020, 16 pages. |
CN201810849491.5—First Office Action mailed on Apr. 22, 2020, 18 pages. |
CN201810849492.X—First Office Action mailed on Apr. 22, 2020, 15 pages. |
CN201810849492.X—Second Office Action mailed on Jan. 7, 2021, 13 pages. |
CN201810849496.8—First Office Action mailed on Aug. 3, 2020, 17 pages. |
CN201810849497.2—Chinese Office Action mailed on May 26, 2020, 13 pages. |
CN201810849498.7—First Office Action mailed on Jul. 1, 2020, 12 pages. |
CN201811436410.5—First Office Action mailed on Apr. 30, 2020, 13 pages. |
CN201811440571.1—First Office Action mailed on Apr. 30, 2020, 20 pages. |
CN201811440571.1—Second Office Action mailed on Nov. 4, 2020, 12 pages. |
CN201910067288.7—First Office Action mailed on May 22, 2020, 29 pages. |
CN201910067288.7—Second Office Action mailed on Oct. 29, 2020, 27 pages. |
CN201910070501.X—First Office Action mailed on May 11, 2020, 29 pages. |
CN201910070501.X—Second Office Action mailed on Oct. 29, 2020, 9 pages. |
PCT/CN2018/105463—International Search Report and Written Opinion mailed on Dec. 3, 2018, 12 pages. |
Xufei Liu, “Say goodbye to Photoshop, Decryption of neural network based skin adjustment technic”, Computer Fan, Apr. 15, 2017, 3 pages (With Brief English Explanation). |
Tung et al., “Deep Neural Network Compression By In-parallel Pruning-quantization” IEEE Transactions On Pattern Analysis And Machine Intelligence, vol. 42, No. 3, 2018, pp. 568-579. |
U.S. Appl. No. 16/760,235—Final Office Action mailed on Feb. 21, 2024, 10 pages. |
U.S. Appl. No. 16/760,235—Non-Final Office Action mailed on Aug. 2, 2023, 9 pages. |
U.S. Appl. No. 17/119,029—Non-Final Office Action mailed on Sep. 26, 2023, 11 pages. |
U.S. Appl. No. 17/119,093—Final Office Action mailed on Feb. 22, 2024, 32 pages. |
U.S. Appl. No. 17/119,093—Non-Final Office Action mailed on Jun. 21, 2023, 35 pages. |
U.S. Appl. No. 17/119,148—Non-Final Office Action mailed on May 10, 2023, 11 pages. |
U.S. Appl. No. 17/119,148—Notice of Allowance mailed on Nov. 22, 2023, 10 pages. |
U.S. Appl. No. 17/119,193—Non-Final Office Action mailed on Aug. 23, 2023, 46 pages. |
U.S. Appl. No. 17/119,234—Non-Final Office Action mailed on Sep. 13, 2023, 34 pages. |
U.S. Appl. No. 17/119,269—Non-Final Office Action mailed on Aug. 7, 2023, 16 pages. |
U.S. Appl. No. 17/119,269—Notice of Allowance mailed on Feb. 27, 2024, 10 pages. |
U.S. Appl. No. 17/119,309—Corrected Notice of Allowability mailed on Mar. 24, 2023, 8 pages. |
U.S. Appl. No. 17/119,309—Non-Final Office Action mailed on Aug. 24, 2022, 50 pages. |
U.S. Appl. No. 17/119,309—Notice of Allowance mailed on Jun. 20, 2023, 15 pages. |
U.S. Appl. No. 17/119,309—Notice of Allowance mailed on Mar. 9, 2023, 11 pages. |
U.S. Appl. No. 17/119,347—Non-Final Office Action mailed on Aug. 15, 2023, 23 pages. |
U.S. Appl. No. 17/119,347—Notice of Allowance mailed on Feb. 23, 2024, 12 pages. |
U.S. Appl. No. 17/119,234—Final Office Action mailed on Jun. 4, 2024, 104 pages. |
Number | Date | Country |
---|---|---|
20210098001 A1 | Apr 2021 | US |
Number | Date | Country |
---|---|---|
Parent | 16760235 | US |
Child | 17119213 | US |