This application claims priority to Chinese Application No. 202310993452.3, filed Aug. 8, 2023, the disclosure of which is incorporated herein by reference in its entirety.
The present disclosure relates to the field of computer technologies, and more specifically, to a method, an apparatus, an electronic device, and a storage medium for audio processing.
Existing technologies can process audio data, e.g., adding the meowing of kittens to a piece of audio data of birds chirping. However, the schemes in the existing technologies focus on speech processing rather than music processing. After music is processed by the existing technologies, the music before and after processing is less harmonious and less consistent in musicality.
Embodiments of the present disclosure provide a method, an apparatus, an electronic device, and a storage medium for audio processing. The embodiments may process music data while ensuring that the music data before and after processing are highly harmonious and consistent in musicality.
In a first aspect, embodiments of the present disclosure provide a method for audio processing, comprising: obtaining first music data and a processing instruction in text form associated with the first music data; extracting, by a music processing model, a first chord progression feature and an audio feature of the first music data, and a text feature of the processing instruction; and processing, by the music processing model, the audio feature in accordance with the first chord progression feature and the text feature, to generate second music data; wherein a similarity between the first chord progression feature of the first music data and a second chord progression feature of the second music data is greater than a similarity threshold.
In a second aspect, embodiments of the present disclosure provide an apparatus for audio processing, comprising:
In a third aspect, embodiments of the present disclosure provide an electronic device, comprising: a processor; and a memory configured to store computer-executable instructions, the computer-executable instructions, when executed, causing the processor to implement steps of the method according to the above first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable storage medium, wherein the computer-readable storage medium stores computer-executable instructions, the computer-executable instructions, when executed by a processor, implementing the steps of the method according to the above first aspect.
In one or more embodiments of the present disclosure, first music data and a processing instruction in text form associated with the first music data are obtained; a first chord progression feature and an audio feature of the first music data and a text feature of the processing instruction are extracted by a music processing model; and the audio feature is processed, by the music processing model, in accordance with the first chord progression feature and the text feature, to generate second music data, wherein a similarity between the first chord progression feature of the first music data and a second chord progression feature of the second music data is greater than a similarity threshold. Since the music processing model extracts the first chord progression feature of the first music data and generates the second music data based on the extracted first chord progression feature, the music processing model, when processing the first music data to generate the second music data, may ensure that the chord progression features of the first music data and the second music data are highly consistent. Therefore, the first music data and the second music data are highly harmonious and consistent in musicality.
To explain one or more embodiments of the present disclosure or the technical solutions in the prior art more clearly, a brief introduction to the drawings required for describing the specific embodiments or the prior art is provided below. Obviously, the following drawings illustrate only some embodiments of the present disclosure, and those skilled in the art may obtain other drawings on the basis of the illustrated ones without any inventive effort.
To allow those skilled in the art to better understand the technical solutions in one or more embodiments of the present disclosure, the technical solutions in one or more embodiments of the present disclosure are described clearly and comprehensively below with reference to the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present disclosure. All other embodiments obtained by those skilled in the art on the basis of one or more embodiments of the present disclosure without any inventive effort should fall within the protection scope of the present disclosure.
Embodiments of the present disclosure provide an audio processing method, which may process music data and ensure that the music data before processing and the music data after processing are highly harmonious and consistent in musicality.
In this embodiment, first music data and a processing instruction in text form associated with the first music data are obtained; a first chord progression feature and an audio feature of the first music data and a text feature of the processing instruction are extracted by a music processing model; and the audio feature is processed, by the music processing model, in accordance with the first chord progression feature and the text feature, to generate second music data, wherein a similarity between the first chord progression feature of the first music data and a second chord progression feature of the second music data is greater than a similarity threshold. Since the music processing model extracts the first chord progression feature of the first music data and generates the second music data based on the extracted first chord progression feature, the music processing model, when processing the first music data to generate the second music data, may ensure that the chord progression features of the first music data and the second music data are highly consistent. Therefore, the first music data and the second music data are highly harmonious and consistent in musicality.
The flow of the audio processing method includes the following steps:
Step S102: obtaining first music data and a processing instruction in text form associated with the first music data;
Step S104: extracting, by a music processing model, a first chord progression feature and an audio feature of the first music data, and a text feature of the processing instruction;
Step S106: processing, by the music processing model, the audio feature in accordance with the first chord progression feature and the text feature, to generate second music data.
In the above step S102, the first music data and the processing instruction in text form associated with the first music data are obtained. The first music data is the music data before processing. The first music data and the processing instruction in text form associated with the first music data may be input by users, and the users may process the first music data by inputting them. The processing instruction may enable at least one of the following processing on the first music data: adding an instrument track to the first music data, deleting an existing instrument track from the first music data, modifying a music style of the first music data, and modifying a music emotion of the first music data. The music style refers to the style of the melody and may be, for example, fast-rhythm, slow-rhythm, radical, high-pitched, soothing, low-pitched, etc. The music emotion describes the emotional feelings the music brings to the audience and may be, for example, happy, cheerful, sad, depressed, peaceful, etc.
A piece of pure music played by piano and bass is taken as an example of the first music data. In one example where the processing instruction is “adding guitar”, the processing instruction is used to remix the first music data by adding a guitar track. In another example where the processing instruction is “removing bass”, the processing instruction is used to remix the first music data by deleting the existing bass track from the first music data. In a further example where the processing instruction includes “adding guitar, removing bass and processing the music into a fast-rhythm style”, the processing instruction is provided to remix the first music data by adding a guitar track, deleting the bass track, and processing the first music data into music data with a fast-rhythm style. In another example where the processing instruction includes “adding guitar and processing the music into sad music”, the processing instruction is used to remix the first music data by adding a guitar track and processing the first music data into music data which deliver sad emotions when heard by the audience.
In one example, the users may upload the first music data on a terminal device, such as a mobile phone, and input the processing instruction in text form in a manner similar to entering chat messages, so that the first music data and the processing instruction are obtained. In another example, the users may upload the first music data on the terminal device and input the processing instruction in audio form in a similar manner, so that the first music data and the processing instruction in audio form are obtained; the processing instruction in audio form is then converted into the processing instruction in text form. After the second music data are obtained, the terminal device may play the second music data, thereby implementing user-interactive music data processing.
In the above step S102, the first music data and the processing instruction in text form are also input to the music processing model. The music processing model is a pre-trained model, which may process the first music data in accordance with the processing instruction in text form, to obtain second music data matching the processing instruction; that is, the second music data may match the processing content indicated by the processing instruction. For example, according to the processing instruction in the above example, the first music data are remixed by adding a guitar track and processed into second music data which deliver sad emotions when heard by the audience.
In the above step S104, the music processing model extracts the chord progression feature of the first music data as the first chord progression feature, and extracts the audio feature of the first music data and the text feature of the processing instruction.
In the above step S106, the audio feature is processed, by the music processing model, in accordance with the first chord progression feature and the text feature, to generate the second music data. Since the second music data are generated with reference to the first chord progression feature of the first music data, a similarity between the first chord progression feature of the first music data and the second chord progression feature of the second music data is greater than a similarity threshold, which ensures consistent chord progression features between the first music data and the second music data. Accordingly, the first music data and the second music data are highly harmonious and consistent in musicality. The similarity threshold may be a preset threshold. The similarity between the first chord progression feature and the second chord progression feature is obtained by comparing the chord types at different time points represented by the first chord progression feature with the chord types at different time points represented by the second chord progression feature. For example, the first chord progression feature represents the chord types at respective time points as type 1, type 2 and type 1, and the second chord progression feature represents the chord types at respective time points as type 1, type 2, type 4 and type 5. Suppose the similarity equals the number of matching chord types between the first chord progression feature and the second chord progression feature (counting repeated chord types) divided by the maximum of the numbers of chord types (counting repeated chord types) in the two features. In such a case, the similarity between the first chord progression feature and the second chord progression feature is determined to be 50%.
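As a minimal illustration of this similarity rule, the following sketch computes the ratio described above; the chord-type labels and the multiset-style matching are illustrative assumptions rather than the disclosed implementation:

```python
from collections import Counter

def chord_progression_similarity(chords_a, chords_b):
    """Number of matching chord types (counting repetitions) divided by
    the larger of the two progression lengths, as described above."""
    matches = sum((Counter(chords_a) & Counter(chords_b)).values())
    return matches / max(len(chords_a), len(chords_b))

# The example from the text: (type 1, type 2, type 1) vs. (type 1, type 2, type 4, type 5)
print(chord_progression_similarity(
    ["type1", "type2", "type1"],
    ["type1", "type2", "type4", "type5"]))  # 0.5, i.e., 50%
```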
In one embodiment, processing, by the music processing model, the audio feature in accordance with the first chord progression feature and the text feature, to generate second music data includes: generating, by the music processing model, random noise data for the first music data; performing, by the music processing model, feature compression processing on the audio feature and a noise feature of the random noise data in accordance with the text feature, to obtain a first feature; and performing, by the music processing model, feature weight adjustment processing and feature restoration processing on the first feature in accordance with the first chord progression feature and the text feature, to generate the second music data.
In this embodiment, the music processing model may randomly generate noise data for the first music data; the randomly generated noise data are referred to as random noise data. The audio feature extracting unit in the music processing model may also process the random noise data to extract a noise feature of the random noise data. Next, feature compression processing is performed, by the music processing model, on the audio feature and the noise feature in accordance with the text feature, to obtain a first feature corresponding to the first music data and the random noise data. The first feature may be data in matrix form, and is a more abstract and more aggregated effective feature extracted from the audio feature of the first music data and the noise feature of the random noise data. The first feature represents the information in the first music data and the information in the random noise data that are associated with the user-input processing instruction; such associated information is the effective information of the first music data and the random noise data.
Next, feature weight adjustment processing and feature restoration processing are performed, by the music processing model, on the first feature corresponding to the first music data and the random noise data in accordance with the above first chord progression feature and the above text feature, to generate second music data; wherein feature values in the first feature have feature weights, the feature weights indicating importance of the feature values at feature restoration; and the feature weight adjustment processing adjusts the feature weights of feature values in the first feature.
Accordingly, this embodiment generates, by the music processing model, random noise data for the first music data, extracts the first feature corresponding to the first music data and the random noise data in accordance with the text feature of the processing instruction, and performs feature weight adjustment processing and feature restoration processing on the first feature in accordance with the first chord progression feature of the first music data and the text feature of the processing instruction, to generate the second music data. As the second music data are obtained by performing the feature weight adjustment processing and the feature restoration processing on the first feature based on the first chord progression feature and the text feature, the similarity between the first chord progression feature of the first music data and the second chord progression feature of the second music data is greater than the similarity threshold, which ensures consistency between the chord progression features of the first music data and the second music data. The second music data also match the processing content indicated by the users' processing instruction, so that music data meeting the users' requirements are generated.
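For orientation, the overall flow described above can be sketched as follows; every module name and wiring choice here (the encoders, the concatenation-based compression, the attention-based reweighting) is a simplified placeholder assumed for illustration, not the disclosed network:

```python
import torch
from torch import nn

class MusicEditSketch(nn.Module):
    """Structural sketch of the described flow with trivial stand-in modules."""
    def __init__(self, dim=64):
        super().__init__()
        self.audio_encoder = nn.Linear(dim, dim)       # audio feature extracting unit
        self.chord_encoder = nn.Linear(dim, dim)       # chord progression feature extracting unit
        self.text_encoder = nn.Linear(dim, dim)        # text feature extracting unit
        self.compress = nn.Linear(3 * dim, dim)        # feature compression -> first feature
        self.reweight = nn.MultiheadAttention(dim, 4, batch_first=True)  # weight adjustment
        self.restore = nn.Linear(2 * dim, dim)         # feature restoration (decoding omitted)

    def forward(self, music, instruction, chords):
        a = self.audio_encoder(music)                          # audio feature
        c = self.chord_encoder(chords)                         # first chord progression feature
        t = self.text_encoder(instruction)                     # text feature
        n = self.audio_encoder(torch.randn_like(music))        # noise feature of random noise data
        first = self.compress(torch.cat([a, n, t], dim=-1))    # first feature
        adjusted, _ = self.reweight(first, c, c)               # adjust weights against chords
        return self.restore(torch.cat([adjusted, t], dim=-1))  # second-music-data stand-in

B, L, D = 2, 16, 64
model = MusicEditSketch(D)
out = model(torch.randn(B, L, D), torch.randn(B, L, D), torch.randn(B, L, D))
print(out.shape)  # torch.Size([2, 16, 64])
```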
In one embodiment, performing, by the music processing model, feature compression processing on the audio feature and a noise feature of random noise data in accordance with the text feature to obtain a first feature includes:
down-sampling the audio feature by the music processing model, and down-sampling the noise feature and the down-sampled audio feature in accordance with the text feature to obtain the first feature.
In this embodiment, the audio feature of the first music data is first down-sampled by the music processing model. Then, the noise feature of the random noise data and the down-sampled audio feature are further down-sampled in accordance with the text feature, to obtain the first feature.
Accordingly, in this embodiment, the audio feature of the first music data is down-sampled multiple times, such that the music processing model, when generating the first feature, learns the audio feature of the first music data more intensively. Therefore, the information associated with the processing instruction in the first music data denoted by the first feature is more accurate and the second music data generated based on the first feature are more aligned with users' requirements.
According to one embodiment, the music processing model includes a first down-sampling unit and a second down-sampling unit; the first down-sampling unit consists of a plurality of first down-sampling layers connected sequentially, and the second down-sampling unit consists of a plurality of second down-sampling layers connected sequentially. The down-sampling accordingly includes:
down-sampling the audio feature by respective first down-sampling layers; wherein an input of a first layer of the first down-sampling layers includes the audio feature; an input of an (n+1)-th layer of the first down-sampling layers includes an output of an n-th layer of the first down-sampling layers; n is an integer greater than or equal to 1 and smaller than or equal to T−1, and T is the number of the first down-sampling layers; and
down-sampling, by respective second down-sampling layers, the noise feature and the down-sampled audio feature output by the first down-sampling layers in accordance with the text feature, to obtain the first feature.
Therefore, in this embodiment, the audio feature is first down-sampled by the first down-sampling layers, and the noise feature of the random noise data and the down-sampled audio feature are then down-sampled by the second down-sampling layers in accordance with the text feature, such that the music processing model, when generating the first feature, learns the audio feature of the first music data more intensively. Therefore, the information associated with the processing instruction in the first music data denoted by the first feature is more accurate.
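A minimal sketch of the two-stage down-sampling is given below, assuming strided 1-D convolutions as the down-sampling layers and a simple additive combination of the noise feature with the down-sampled audio feature; the text conditioning is omitted for brevity, and all of these choices are assumptions for illustration:

```python
import torch
from torch import nn

class DownSamplingUnitSketch(nn.Module):
    """T sequentially connected down-sampling layers: the input of layer n+1
    is the output of layer n, and each layer halves the time axis."""
    def __init__(self, channels=32, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=4, stride=2, padding=1)
            for _ in range(num_layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

audio_feat = torch.randn(1, 32, 256)                  # (batch, channels, time)
first_unit = DownSamplingUnitSketch()                 # first down-sampling unit
down_audio = first_unit(audio_feat)                   # 256 -> 32 along the time axis
noise_feat = torch.randn_like(down_audio)             # noise feature at matching scale (assumption)
second_unit = DownSamplingUnitSketch()                # second down-sampling unit
first_feature = second_unit(down_audio + noise_feat)  # additive combination (assumption)
print(first_feature.shape)                            # torch.Size([1, 32, 4])
```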
Down-sampling mentioned in each embodiment of the present disclosure is one approach for feature compression. In actual implementations, the down-sampling may also be replaced by other feature compression means, which are not restricted here.
In one embodiment, the music processing model includes a feature weight adjustment unit and a feature restoration unit. The feature weight adjustment unit adjusts, in accordance with the first chord progression feature, the feature weights of respective feature values in the first feature, to obtain the first feature after feature weight adjustment. The feature restoration unit then performs feature restoration on the first feature after feature weight adjustment in accordance with the text feature, to generate the second music data.
Therefore, by this embodiment, feature weights of respective feature values in the first feature are first adjusted, by the feature weight adjustment unit, in accordance with the first chord progression feature, to obtain the first feature after feature weight adjustment; next, feature restoration is performed, by the feature restoration unit, on the first feature after feature weight adjustment in accordance with the text feature, to generate second music data. By adjusting the feature weights of the feature values of the first feature, the second music data matching the processing instruction and having the second chord progression feature meeting the requirements may be obtained from the feature restoration based on the first feature after feature weight adjustment, which enhances the accuracy for generating the second music data.
According to one embodiment, the feature weight adjustment unit includes a feature weight adjustment layer based on an attention mechanism.
In this embodiment, in accordance with the first chord progression feature, the feature weights of respective feature values in the first feature may be adjusted based on cross attention mechanism by the feature weight adjustment layer, to obtain the first feature after feature weight adjustment.
Accordingly, by this embodiment, the feature weights of respective feature values in the first feature may be efficiently and rapidly adjusted based on the attention mechanism by the feature weight adjustment layer in accordance with the first chord progression feature, to improve the efficiency for obtaining the first feature after feature weight adjustment.
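The following sketch shows one way such a cross-attention-based reweighting could be wired, with the first feature as the query and the chord progression feature as key and value; the exact wiring and dimensions are assumptions, since the disclosure only states that a cross attention mechanism is used:

```python
import torch
from torch import nn

def chord_guided_reweighting(first_feature, chord_feature, attn):
    """Adjust feature weights by letting the first feature attend to the
    chord progression feature (assumed query/key/value assignment)."""
    adjusted, weights = attn(query=first_feature, key=chord_feature, value=chord_feature)
    return adjusted, weights

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
first_feature = torch.randn(1, 8, 64)   # compressed first feature (batch, tokens, dim)
chord_feature = torch.randn(1, 12, 64)  # embedded chord types over time
adjusted, weights = chord_guided_reweighting(first_feature, chord_feature, attn)
print(adjusted.shape, weights.shape)    # torch.Size([1, 8, 64]) torch.Size([1, 8, 12])
```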
In one embodiment, the feature restoration unit includes a plurality of up-sampling layers connected sequentially. The first feature after feature weight adjustment is up-sampled by the respective up-sampling layers in accordance with the text feature.
In this embodiment, the feature output by the last layer of the up-sampling layers is also decoded to obtain the second music data.
Therefore, in this embodiment, the first feature after feature weight adjustment is up-sampled by respective up-sampling layers in accordance with the text feature; and a feature output by a last layer of up-sampling layers is decoded to obtain second music data. Since the first feature after feature weight adjustment is up-sampled in accordance with the text feature of the processing instruction, the generated second music data may match the processing instruction of the users.
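A sketch of such a feature restoration unit follows, assuming transposed 1-D convolutions as the up-sampling layers, additive text conditioning, and a single convolution as a stand-in decoding unit; these are illustrative assumptions, not the disclosed structure:

```python
import torch
from torch import nn

class UpSamplingRestorerSketch(nn.Module):
    """Sequentially connected up-sampling layers followed by a decoder;
    each layer doubles the time axis."""
    def __init__(self, channels=32, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.ConvTranspose1d(channels, channels, kernel_size=4, stride=2, padding=1)
            for _ in range(num_layers))
        self.decoder = nn.Conv1d(channels, 1, kernel_size=1)  # stand-in decoding unit

    def forward(self, x, text_feat):
        for layer in self.layers:
            x = torch.relu(layer(x + text_feat))  # additive text conditioning (assumption)
        return self.decoder(x)                    # decode the last up-sampling output

restorer = UpSamplingRestorerSketch()
first_feature = torch.randn(1, 32, 4)   # first feature after weight adjustment
text_feat = torch.randn(1, 32, 1)       # pooled text feature, broadcast over time
second_music = restorer(first_feature, text_feat)
print(second_music.shape)               # torch.Size([1, 1, 32])
```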
According to one embodiment, the music processing model includes an emotion style guide unit and a decoding unit.
In this embodiment, the text feature of the processing instruction is also input to the emotion style guide unit, which identifies the music emotion indicated by the text feature, the music style indicated by the text feature, or both the music emotion and the music style indicated by the text feature, as decoding instruction information.
Further, the emotion style guide unit instructs the decoding unit to decode the feature output by the last layer of the up-sampling layers in accordance with the decoding instruction information, to obtain second music data matching the decoding instruction information. The emotion style guide unit may calculate a gap between the decoding instruction information and the decoded second music data and, based on feedback of the gap, adjust the audio feature and the first chord progression feature of the first music data, so as to adjust the first feature and regenerate the second music data. Thus, second music data matching the decoding instruction information are finally generated.
Accordingly, in this embodiment, since the music emotion and/or music style indicated by the text feature may be identified as the decoding instruction information, and the feature output by the last layer of the up-sampling layers may be decoded according to the decoding instruction information to obtain the second music data, the second music data may match the decoding instruction information and have the music style and/or music emotion desired by the users.
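One way to realize such a gap is a distance in a joint text-audio embedding space (for example, embeddings from a CLAP-style model, one of the options mentioned below); the cosine formulation and the gradient-based feedback here are assumptions for illustration:

```python
import torch
import torch.nn.functional as F

def emotion_style_gap(audio_embed, instruction_embed):
    """Gap between the decoded audio and the decoding instruction information,
    measured as one minus cosine similarity in a shared embedding space."""
    return 1.0 - F.cosine_similarity(audio_embed, instruction_embed, dim=-1).mean()

# Hypothetical embeddings from a shared text-audio encoder
audio_embed = torch.randn(1, 128, requires_grad=True)
instruction_embed = torch.randn(1, 128)
gap = emotion_style_gap(audio_embed, instruction_embed)
gap.backward()  # the gradient could serve as the feedback that steers the features
print(float(gap), audio_embed.grad.shape)
```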
In a specific example, in the pre-processing module of the music processing model, the audio feature extracting unit includes an autoencoder, which may take the input information as its learning objective and conduct representation learning on the input information; alternatively, the audio feature extracting unit may be implemented by the audio feature extraction network of the SoundStream model. The chord progression feature extracting unit may be implemented by existing technologies, such as those disclosed in the article “JOINTIST: JOINT LEARNING FOR MULTI-INSTRUMENT TRANSCRIPTION AND ITS APPLICATIONS”.
In one specific example, the intermediate processing module of the music processing model may be implemented based on the U-Net network of the Diffusion Model.
In one specific example, in the post-processing module of the music processing model, the decoding unit may be implemented based on the decoder of an autoencoder, or the decoding network of the SoundStream model. The emotion style guide unit may be implemented on the basis of a MuLan model, a CLAP model, or the like.
In a specific example, in a case where the intermediate processing module of the music processing model is implemented based on the U-Net network of the Diffusion Model, a Chunk Transformer may replace the Spatial Transformer in the U-Net network.
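The disclosure names the Chunk Transformer without detailing it; one plausible reading, sketched below under that assumption, applies self-attention within fixed-size chunks of the time axis rather than globally, which reduces the attention cost on long audio sequences:

```python
import torch
from torch import nn

class ChunkSelfAttentionSketch(nn.Module):
    """Self-attention restricted to fixed-size chunks of the time axis
    (an assumed reading of 'Chunk Transformer')."""
    def __init__(self, dim=64, heads=4, chunk=16):
        super().__init__()
        self.chunk = chunk
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                  # x: (batch, time, dim)
        b, t, d = x.shape
        assert t % self.chunk == 0, "pad the time axis to a multiple of chunk"
        x = x.reshape(b * t // self.chunk, self.chunk, d)  # fold chunks into the batch
        out, _ = self.attn(x, x, x)                        # attention inside each chunk
        return out.reshape(b, t, d)

block = ChunkSelfAttentionSketch()
print(block(torch.randn(2, 64, 64)).shape)  # torch.Size([2, 64, 64])
```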
The detailed procedure of processing the first music data based on the music processing model has been introduced above. Next, a training procedure of the music processing model is described below.
Step S302: obtaining sample music data, a sample processing instruction in text form associated with the sample music data and target music data corresponding to the sample music data;
Step S304: extracting, by a pre-built neural network structure, a sample chord progression feature and a sample audio feature of the sample music data, and a sample text feature of the sample processing instruction;
Step S306: training the neural network structure based on the sample chord progression feature, the sample audio feature, the sample text feature and the target music data, where the trained neural network structure is the music processing model.
In the above step S302, sample music data, a sample processing instruction in text form associated with the sample music data, and target music data corresponding to the sample music data are obtained. The target music data are the music data expected to be output after the sample music data are processed according to the sample processing instruction.
In the above step S304, a sample chord progression feature and a sample audio feature of the sample music data, and a sample text feature of the sample processing instruction are extracted by a pre-built neural network structure.
In the above step S306, the neural network structure is trained based on the sample chord progression feature, the sample audio feature, the sample text feature and the target music data, where the trained neural network structure is the music processing model.
Therefore, by this embodiment, the music processing model may be efficiently and rapidly trained in accordance with the sample music data, the sample processing instruction in text form associated with the sample music data and the target music data corresponding to the sample music data. Since the sample chord progression feature of the sample music data is used during the training of the music processing model, the music processing model performs well in maintaining consistency of the chord progression feature, such that the music data before processing and after processing are highly harmonious and consistent in musicality.
In one embodiment, training the neural network structure based on the sample chord progression feature, the sample audio feature, the sample text feature and the target music data includes:
superimposing sample random noise data on the target music data to obtain target noise data; extracting a target noise feature of the target noise data; and training the neural network structure based on the sample chord progression feature, the sample audio feature, the sample text feature and the target noise feature.
Therefore, by this embodiment, the sample random noise data can be superimposed on the target music data to obtain the target noise data; the target noise feature of the target noise data is extracted and the neural network structure is trained efficiently and rapidly based on the sample chord progression feature, the sample audio feature, the sample text feature and the target noise feature.
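Superimposing noise on the target data can be made concrete with the standard Diffusion Model forward process, sketched below; the DDPM-style schedule is an assumption, since the disclosure only states that sample random noise data are superimposed on the target music data:

```python
import torch

def superimpose_noise(target_music, t, num_steps=1000):
    """Obtain target noise data by mixing the target music data with sampled
    Gaussian noise according to a DDPM-style schedule (assumed formulation)."""
    betas = torch.linspace(1e-4, 0.02, num_steps)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    noise = torch.randn_like(target_music)  # sample random noise data
    target_noise_data = alpha_bar.sqrt() * target_music + (1 - alpha_bar).sqrt() * noise
    return target_noise_data, noise

x0 = torch.randn(1, 1, 256)  # stand-in target music data
xt, eps = superimpose_noise(x0, t=500)
print(xt.shape)              # torch.Size([1, 1, 256])
```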
In one example, training the neural network structure based on the sample chord progression feature, the sample audio feature, the sample text feature and the target noise feature includes: performing, by the neural network structure, feature compression processing on the sample audio feature and the target noise feature in accordance with the sample text feature, to obtain a second feature; performing, by the neural network structure, feature weight adjustment processing and feature restoration processing on the second feature in accordance with the sample chord progression feature and the sample text feature, to generate processed sample music data; and training the neural network structure based on the processed sample music data and the target music data.
In this embodiment, feature compression processing is performed, by the music processing model in training, i.e., the neural network structure, on the sample audio feature and the target noise feature in accordance with the sample text feature, to obtain a second feature. Similar to the first feature, the second feature may be data in matrix form, and is a more abstract and more aggregated effective feature extracted from the sample audio feature of the sample music data and the target noise feature of the target noise data. The second feature represents the information in the sample music data and the information in the target noise data that are associated with the sample processing instruction; such associated information is the effective information of the sample music data and the target noise data.
Next, feature weight adjustment processing and feature restoration processing are performed, by the neural network structure, on the second feature in accordance with the sample chord progression feature and the sample text feature, to generate processed sample music data; feature values in the second feature have feature weights, the feature weights indicate the importance of the feature values at feature restoration, and the feature weight adjustment processing adjusts these feature weights. The neural network structure is then trained based on the processed sample music data and the target music data.
Accordingly, by this embodiment, the feature weight adjustment processing and the feature restoration processing are performed on the second feature in accordance with the sample chord progression feature and the sample text feature to generate processed sample music data; and the neural network structure is trained based on the processed sample music data and the target music data. Therefore, when the well-trained model is processing the music data, the similarity between the chord progression features of the music data before and after processing is greater than the similarity threshold, to generate music data meeting the users' needs.
In one embodiment, performing, by the neural network structure, feature compression processing on the sample audio feature and the target noise feature in accordance with the sample text feature to obtain a second feature includes: down-sampling the sample audio feature by the neural network structure, and down-sampling the target noise feature and the down-sampled sample audio feature in accordance with the sample text feature, to obtain the second feature.
In this embodiment, the neural network structure first down-samples the sample audio feature of the sample music data and then down-samples the target noise feature of the target noise data and the down-sampled sample audio feature again in accordance with the sample text feature to obtain the second feature.
Accordingly, in this embodiment, the sample audio feature of the sample music data is down-sampled multiple times, such that the music processing model, when being trained, learns the sample audio feature of the sample music data more intensively. Therefore, the information associated with the sample processing instruction in the sample music data denoted by the second feature is more accurate and the trained music processing model may generate the music data more aligned with users' requirements.
According to one embodiment, the neural network structure includes a first down-sampling unit and a second down-sampling unit; the first down-sampling unit consists of a plurality of first down-sampling layers connected sequentially, and the second down-sampling unit consists of a plurality of second down-sampling layers connected sequentially. The sample audio feature is down-sampled by the respective first down-sampling layers, and the target noise feature and the down-sampled sample audio feature are then down-sampled by the respective second down-sampling layers in accordance with the sample text feature, to obtain the second feature.
Therefore, in this embodiment, the sample audio feature is first down-sampled by the first down-sampling layers, and the target noise feature and the down-sampled sample audio feature are then down-sampled by the second down-sampling layers in accordance with the sample text feature, such that the neural network structure learns the sample audio feature of the sample music data more intensively. Therefore, the information associated with the sample processing instruction in the sample music data denoted by the second feature is more accurate.
In one embodiment, the neural network structure includes a feature weight adjustment unit and a feature restoration unit. The feature weight adjustment unit adjusts, in accordance with the sample chord progression feature, the feature weights of respective feature values in the second feature, to obtain the second feature after feature weight adjustment. The feature restoration unit then performs feature restoration on the second feature after feature weight adjustment in accordance with the sample text feature, to generate the processed sample music data.
Therefore, by this embodiment, feature weights of respective feature values in the second feature are first adjusted, by the feature weight adjustment unit, in accordance with the sample chord progression feature, to obtain the second feature after feature weight adjustment; next, feature restoration is performed, by the feature restoration unit, on the second feature after feature weight adjustment in accordance with the sample text feature, to generate processed sample music data. By adjusting the feature weight of the feature value of the second feature, the processed sample music data matching the sample processing instruction and having the chord progression feature meeting the requirements may be obtained after the feature restoration based on the second feature after feature weight adjustment, which enhances the accuracy for generating the processed sample music data and precision for model training.
According to one embodiment, the feature weight adjustment unit of the neural network structure includes a feature weight adjustment layer based on an attention mechanism.
In this embodiment, in accordance with the sample chord progression feature, the feature weights of respective feature values in the second feature may be adjusted based on cross attention mechanism by the feature weight adjustment layer, to obtain the second feature after feature weight adjustment.
Accordingly, by this embodiment, the feature weights of respective feature values in the second feature may be efficiently and rapidly adjusted based on the attention mechanism by the feature weight adjustment layer in accordance with the sample chord progression feature, to improve the efficiency for obtaining the second feature after feature weight adjustment.
In one embodiment, the feature restoration unit includes a plurality of up-sampling layers connected sequentially. The second feature after feature weight adjustment is up-sampled by the respective up-sampling layers in accordance with the sample text feature.
In this embodiment, the feature output by the last layer of the up-sampling layers is also decoded to obtain the processed sample music data.
Therefore, in this embodiment, the second feature after feature weight adjustment is up-sampled, by respective up-sampling layers, in accordance with the sample text feature; and a feature output by a last layer of up-sampling layers is decoded to obtain processed sample music data. Since the second feature after feature weight adjustment is up-sampled in accordance with the sample text feature of the sample processing instruction, the generated processed sample music data may match the sample processing instruction.
In this embodiment, after the processed sample music data are obtained, the neural network structure may be trained based on differences between the processed sample music data and the target music data. Also, the sample random noise data added during the model training may be predicted in accordance with principles of the Diffusion Model, to train the model. The specific training approaches are not restricted here.
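The two training signals mentioned above can be sketched as follows; both the reconstruction term and the optional Diffusion-Model-style noise prediction term use mean squared error here as an assumed formulation, since the disclosure leaves the exact losses open:

```python
import torch
import torch.nn.functional as F

def training_loss(processed_sample, target_music, predicted_noise=None, true_noise=None):
    """Difference between processed sample music data and target music data,
    optionally plus a noise prediction term in the Diffusion Model manner."""
    loss = F.mse_loss(processed_sample, target_music)          # difference-based term
    if predicted_noise is not None and true_noise is not None:
        loss = loss + F.mse_loss(predicted_noise, true_noise)  # noise prediction term
    return loss

print(float(training_loss(torch.randn(1, 1, 256), torch.randn(1, 1, 256))))
```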
It is to be explained that the application procedure of the music processing model is similar to its training procedure. Therefore, the training procedure also may be explained with reference to the aforementioned application procedure.
In summary, the music data may be processed by the above music processing model, such that the chord progression features of the music data before and after processing are quite consistent, and the music data before and after processing are highly harmonious and consistent in musicality.
One embodiment of the present disclosure also provides an audio processing apparatus for implementing the above audio processing method.
Alternatively, the generating unit 53 is specifically configured to:
Alternatively, the generating unit 53 is also specifically configured to:
Alternatively, the music processing model includes a first down-sampling unit and a second down-sampling unit; the first down-sampling unit consists of a plurality of first down-sampling layers connected sequentially; and the second down-sampling unit consists of a plurality of second down-sampling layers connected sequentially; wherein the generating unit 53 is configured to:
Alternatively, the music processing model includes a feature weight adjustment unit and a feature restoration unit; wherein the generating unit 53 is also configured to:
Alternatively, the feature weight adjustment unit includes a feature weight adjustment layer based on attention mechanism; wherein the generating unit 53 is also specifically configured to:
Alternatively, the feature restoration unit includes a plurality of up-sampling layers connected sequentially; wherein the generating unit 53 is specifically configured to:
Alternatively, the music processing model includes an emotion style guide unit and a decoding unit; wherein the generating unit 53 is specifically configured to:
Alternatively, there is included a model training unit configured to:
Alternatively, the model training unit is specifically configured to:
Alternatively, the model training unit is specifically configured to:
Alternatively, the model training unit is configured to:
Alternatively, the neural network structure includes a first down-sampling unit and a second down-sampling unit; the first down-sampling unit consists of a plurality of first down-sampling layers connected sequentially; and the second down-sampling unit consists of a plurality of second down-sampling layers connected sequentially; wherein the model training unit is specifically configured to:
Alternatively, the neural network structure includes a feature weight adjustment unit and a feature restoration unit; wherein the model training unit is specifically configured to:
Alternatively, the feature weight adjustment unit includes a feature weight adjustment layer based on attention mechanism; wherein the model training unit is specifically configured to:
Alternatively, the feature restoration unit includes a plurality of up-sampling layers connected sequentially; wherein the model training unit is also configured to:
The audio processing apparatus in the embodiment of the present disclosure may implement the respective procedures of the above audio processing method embodiment and achieve the same effects and functions, and thus the details are not repeated here.
One embodiment of the present disclosure also provides an electronic device.
In a specific embodiment, the electronic device includes a processor; and a memory configured to store computer-executable instructions, wherein the computer-executable instructions, when executed, cause the processor to fulfill the following procedure of: obtaining first music data and a processing instruction in text form associated with the first music data; extracting, by a music processing model, a first chord progression feature and an audio feature of the first music data, and a text feature of the processing instruction; and processing, by the music processing model, the audio feature in accordance with the first chord progression feature and the text feature, to generate second music data; wherein a similarity between the first chord progression feature of the first music data and a second chord progression feature of the second music data is greater than a similarity threshold.
The electronic device in the embodiments of the present disclosure may implement the respective procedures of the above audio processing method embodiment and achieve the same effects and functions, and thus the details are not repeated here.
A further embodiment of the present disclosure also proposes a computer-readable storage medium for storing computer-executable instructions, wherein the computer-executable instructions, when executed by a processor, cause the processor to fulfill the following procedure of: obtaining first music data and a processing instruction in text form associated with the first music data; extracting, by a music processing model, a first chord progression feature and an audio feature of the first music data, and a text feature of the processing instruction; and processing, by the music processing model, the audio feature in accordance with the first chord progression feature and the text feature, to generate second music data; wherein a similarity between the first chord progression feature of the first music data and a second chord progression feature of the second music data is greater than a similarity threshold.
The storage medium in the embodiment of the present disclosure may implement the respective procedures of the above audio processing method embodiment and achieve the same effects and functions, and thus the details are not repeated here.
In various embodiments of the present disclosure, the computer-readable storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disc or optical disc etc.
In the 1990s, an improvement of a technology could be clearly distinguished as a hardware improvement (for example, an improvement on a circuit structure such as a diode, a transistor, or a switch) or a software improvement (an improvement on a method procedure). However, with the development of technologies, improvements of many method procedures can be considered as direct improvements of hardware circuit structures. Almost all designers program an improved method procedure into a hardware circuit to obtain a corresponding hardware circuit structure. Therefore, it cannot be said that an improvement of a method procedure cannot be implemented by using a hardware entity module. For example, a Programmable Logic Device (PLD) (for example, a Field Programmable Gate Array (FPGA)) is such an integrated circuit whose logical function is determined by the programming performed by a user. Designers program by themselves to “integrate” a digital system into a single PLD, without requiring a chip manufacturer to design and produce a dedicated integrated circuit chip. In addition, instead of manually fabricating an integrated circuit chip, the programming is mostly implemented by “logic compiler” software, which is similar to a software compiler used for program development. Original codes before compiling are also written in a specific programming language, which is referred to as a Hardware Description Language (HDL), and there is more than one type of HDL, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language). Currently, VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are most commonly used. Those skilled in the art should also understand that a hardware circuit that implements the logical method procedure can be easily obtained by logically programming the method procedure in the above hardware description languages and programming it into an integrated circuit.
A controller can be implemented in any appropriate way. For example, the controller may take the form of a microprocessor or a processor, a computer-readable medium that stores computer-readable program codes (such as software or firmware) executable by the (micro)processor, a logic gate, a switch, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microprocessor. Examples of the controller include, but are not limited to, the following microprocessors: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. A memory controller can also be implemented as a part of the control logic of a memory. Those skilled in the art also know that, in addition to implementing the controller by pure computer-readable program codes, it is completely feasible to logically program the method steps such that the controller achieves the same functions in the form of a logic gate, a switch, an application-specific integrated circuit, a programmable logic controller, an embedded microcontroller, or the like. Therefore, such a controller can be considered as a hardware component, and the apparatuses included therein for implementing various functions can also be considered as structures within the hardware component. Alternatively, the apparatuses for implementing various functions can be considered as both software modules for implementing the method and structures within the hardware component.
The system, apparatus, module, or unit described in the above embodiments can be specifically implemented by a computer chip or an entity, or by a product with a certain function. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a cellular phone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination thereof.
For ease of description, the apparatus is described by various units divided by functions. Certainly, during implementation of the present disclosure, the functions of the respective units can be implemented in one or more pieces of software and/or hardware.
Those skilled in the art should understand that one or more embodiments of the present disclosure can be provided as a method, a system, or a computer program product. Therefore, the one or more embodiments of the present disclosure may take the form of hardware-only embodiments, software-only embodiments, or embodiments combining software and hardware. In addition, the one or more embodiments of the present disclosure may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, and an optical memory) containing computer-usable program codes.
The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the device (system), and the computer program product according to the embodiments of the present disclosure. It should be understood that each process and/or block in the flowchart and/or the block diagram and a combination thereof can be implemented by the computer program instructions. These computer program instructions can be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine, such that the instructions executed by a computer or a processor of other programmable data processing devices generate an apparatus for implementing the function specified in one or more flows in the flowcharts or in one or more blocks in the block diagrams.
These computer program instructions can also be stored in a computer-readable memory that can instruct the computer or another programmable data processing device to work in a specific manner, such that the instructions stored in the computer-readable memory generate an article of manufacture that includes an instruction apparatus. The instruction apparatus implements the function specified in one or more flows in the flowcharts or in one or more blocks in the block diagrams.
These computer program instructions also can be loaded to a computer or another programmable data processing device, such that a series of operation steps are performed on the computer or the other programmable device to generate computer-implemented processing. Therefore, the instructions executed on the computer or the other programmable device provide steps for implementing the function specified in one or more flows in the flowcharts or in one or more blocks in the block diagrams.
It is to be noted that the term “include”, “contain”, or any other variants thereof is intended to be a non-exclusive inclusion, such that a process, a method, a product, or a device including a list of elements not only includes those elements but also contains other elements which are not explicitly listed, or elements inherent to such process, method, product, or device. Elements defined by the expression of “including one . . . ” do not, without more constraints, exclude the presence of additional identical elements in the process, method, product, or device including the elements.
One or more embodiments of the present disclosure can be described in the general context of the computer executable instructions executed by the computer, e.g., program module. Generally, the program module includes a routine, a program, an object, an assembly, a data structure for executing a specific task or implementing a specific abstract data type. One or more embodiments of the present disclosure can also be carried out in distributed computing environments. In the distributed computing environments, tasks are performed by remote processing devices connected through a communications network. In the distributed computing environments, the program module can be located in both local and remote computer storage media including storage devices.
The embodiments in the present disclosure are all described in a progressive way. The same or similar parts among the embodiments may refer to each other. Each embodiment focuses on its difference from the others. Particularly, a system implementation is basically similar to a method implementation, and therefore is described briefly. Related parts of the system embodiment may refer to description of the method embodiment.
The previous description is merely embodiments of the present disclosure and does not restrict the present disclosure. For those skilled in the art, the present disclosure may be modified or changed in various ways. Any modifications, equivalent substitutions and improvements shall fall within the scope of the claims of the present disclosure as long as they are within the spirit and the principle of the present disclosure.
Number | Date | Country | Kind
--- | --- | --- | ---
202310993452.3 | Aug. 8, 2023 | CN | national