This application is based upon and claims priority to Chinese Patent Application No. 202110970317.8, filed on Aug. 23, 2021, the entire contents of which are incorporated herein by reference.
The disclosure relates to the field of computer technologies, and particularly to a method and an apparatus for voice recognition, an electronic device, a storage medium and a computer program product.
At present, with the development of technologies such as artificial intelligence and natural language processing, voice recognition technology is widely applied in fields such as intelligent household appliances, robot voice interaction and in-vehicle (vehicle-mounted) voice. For example, in an in-vehicle voice scene, an in-vehicle voice assistant may perform voice recognition on the speaking content of a driver or a passenger in a vehicle, acquire a user intention based on the voice recognition content, and automatically execute a corresponding instruction without requiring manual operations of the user, which provides a fast response speed and is beneficial to driving safety.
According to a first aspect of the disclosure, a method for voice recognition is provided, and includes: performing, by an electronic device, voice recognition on voice information; and updating, by the electronic device, a waiting duration for end-point detection (EPD) from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
According to a second aspect of the disclosure, an electronic device is provided, and includes: at least one processor; and a memory communicatively connected to the at least one processor. The memory stores instructions executable by the at least one processor. When the instructions are executed by the at least one processor, the at least one processor is caused to perform voice recognition on voice information; and update a waiting duration for end-point detection (EPD) from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
According to a third aspect of the disclosure, a non-transitory computer readable storage medium stored with computer instructions is provided. When the computer instructions are executed by a computer, the computer is caused to perform a method for voice recognition. The method includes: performing, by an electronic device, voice recognition on voice information; and updating, by the electronic device, a waiting duration for end-point detection (EPD) from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
It should be understood that the content described in this part is not intended to identify key or important features of embodiments of the disclosure, nor to limit the scope of the disclosure. Other features of the disclosure will become readily understood from the following specification.
The drawings are intended to provide a better understanding of the solution and do not constitute a limitation to the disclosure.
Embodiments of the present disclosure are described below with reference to the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding and should be considered as merely exemplary. Therefore, those skilled in the art should realize that various changes and modifications may be made to the embodiments described herein without departing from the scope of the present disclosure. Similarly, for clarity and conciseness, descriptions of well-known functions and structures are omitted in the following descriptions.
Artificial Intelligence (AI) is a new technological science that studies and develops theories, methods, technologies and application systems configured to simulate, extend and expand human intelligence. At present, AI technology has the advantages of high automation, high accuracy and low cost, and is widely applied.
Voice recognition is a technology that allows a machine to convert a voice signal into a corresponding text or command through a recognition and understanding process, and mainly includes feature extraction technology, pattern matching criteria and model training technology.
Natural Language Processing (NLP) is a science that studies computer systems, and especially software systems, capable of effective natural language communication, and is an important direction in the field of computer science and artificial intelligence.
End-point detection (EPD) is a technology for recognizing a voice end point, that is, for recognizing whether a user has finished speaking, and is an important direction of voice activity detection (VAD) technology, mainly including three aspects: audio framing, feature extraction, and classification recognition.
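To make the three aspects concrete, below is a minimal, illustrative Python sketch of energy-based framing and frame classification; the frame length, energy threshold and function names are assumptions for illustration only and are not part of the disclosure.

```python
import numpy as np

FRAME_MS = 20            # assumed frame length in milliseconds
ENERGY_THRESHOLD = 1e-3  # assumed speech/silence energy threshold

def frame_signal(samples: np.ndarray, sample_rate: int) -> np.ndarray:
    """Audio framing: split a mono waveform into fixed-length frames."""
    frame_len = int(sample_rate * FRAME_MS / 1000)
    n_frames = len(samples) // frame_len
    return samples[: n_frames * frame_len].reshape(n_frames, frame_len)

def is_speech(frame: np.ndarray) -> bool:
    """Feature extraction + classification: mean frame energy against a threshold."""
    return float(np.mean(frame ** 2)) > ENERGY_THRESHOLD
```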
However, a voice recognition process in the related art is easily interrupted by the EPD technology, so that the obtained voice recognition result is incomplete, which degrades the user experience.
As illustrated in FIG. 1, a method for voice recognition is provided in an embodiment of the disclosure, and includes the following steps.
At S101, voice recognition is performed on voice information.
It needs to be noted that an execution subject of the method for voice recognition in the embodiment of the disclosure may be a hardware device with a capability of processing data information and/or software necessary to drive the operation of the hardware device. Optionally, the execution subject may include a workstation, a server, a computer, a user terminal and other smart devices. The user terminal includes but is not limited to a mobile phone, a computer, a smart voice interaction device, a smart appliance, a vehicle-mounted terminal, etc.
In the embodiment of the disclosure, voice recognition may be performed on voice information in an offline and/or online mode, which is not limited here.
In an implementation, an apparatus for voice recognition may be preconfigured on the smart device to perform voice recognition on the voice information, that is, offline voice recognition may thus be achieved. The apparatus for voice recognition may include a voice recognition model.
In an implementation, the smart device may establish a network connection with a server and send the voice information to the server, the apparatus for voice recognition in the server may perform voice recognition on the voice information, and the server may send a voice recognition result back to the smart device, that is, online voice recognition may thus be achieved. The server may include a cloud server.
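As a rough sketch of the online mode only, the snippet below posts collected audio to a recognition server and reads back the result; the endpoint URL and the response field name are hypothetical placeholders, not an actual API of the disclosure.

```python
import requests  # third-party HTTP client (pip install requests)

RECOGNITION_URL = "https://example.com/asr"  # hypothetical server endpoint

def recognize_online(audio_bytes: bytes) -> str:
    """Send collected voice data to the server and return its recognition result."""
    response = requests.post(
        RECOGNITION_URL,
        data=audio_bytes,
        headers={"Content-Type": "application/octet-stream"},
        timeout=5.0,
    )
    response.raise_for_status()
    return response.json()["transcript"]  # hypothetical response field
```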
It should be noted that, in the embodiment of the disclosure, the method for collecting the voice information is not limited here. For example, an apparatus for voice collection may be mounted on or around the smart device to acquire the voice information. The apparatus for voice collection may include a microphone.
At S102, a waiting duration for EPD is updated from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
In the embodiment of the disclosure, EPD may be further performed on the voice information in the process of performing voice recognition on the voice information. The EPD result may be used to determine whether to continue voice recognition on the voice information. For example, when the EPD result is that the end point is not recognized, indicating that a speaking process of a user has not ended, voice recognition on the voice information is continued. When the EPD result is that the end point is recognized, indicating that the speaking process of the user has ended, voice recognition on the voice information is stopped.
It should be noted that, in the embodiment of the disclosure, the waiting duration for EPD refers to a threshold for determining whether the end point is recognized in EPD. For example, when a mute duration in the voice information is less than the waiting duration for EPD, the mute duration is relatively short and the speaking process of the user has not ended, so the EPD result is that the end point is not recognized. On the contrary, when the mute duration reaches (that is, is not less than) the waiting duration for EPD, the mute duration is too long and the speaking process of the user has ended, so the EPD result is that the end point is recognized.
In the embodiment of the disclosure, the waiting duration for EPD may be updated from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration. That is, the waiting duration for EPD may be prolonged in response to recognizing the preset keyword from the voice information.
The preset keyword may be configured according to actual situations, which is not limited here. For example, the preset keyword includes but is not limited to "navigating to", "calling to" and "increasing the temperature to".
The first preset duration and the second preset duration may be set according to actual situations, which is not limited here. For example, the first preset duration and the second preset duration are set to 200 ms and 5 s, respectively.
For example, when the voice recognition result of the voice information is "navigating to", the waiting duration for EPD may be updated from 200 ms to 5 s in response to recognizing the preset keyword from the voice information.
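This keyword-triggered update can be sketched as follows, using the keywords and durations from the examples above; the class and method names are illustrative assumptions rather than the disclosed implementation.

```python
PRESET_KEYWORDS = ("navigating to", "calling to", "increasing the temperature to")
FIRST_PRESET_MS = 200    # first preset duration (short)
SECOND_PRESET_MS = 5000  # second preset duration (long)

class EpdConfig:
    """Holds the current EPD waiting duration (illustrative only)."""

    def __init__(self) -> None:
        self.waiting_duration_ms = FIRST_PRESET_MS

    def on_partial_result(self, text: str) -> None:
        """Prolong the waiting duration when a preset keyword is recognized."""
        if any(text.rstrip().endswith(keyword) for keyword in PRESET_KEYWORDS):
            self.waiting_duration_ms = SECOND_PRESET_MS

config = EpdConfig()
config.on_partial_result("navigating to")
assert config.waiting_duration_ms == 5000
```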
In summary, in the method for voice recognition according to the embodiment of the disclosure, the waiting duration for EPD may be updated from the first preset duration to the second preset duration in response to recognizing the preset keyword from the voice information, where the first preset duration is less than the second preset duration. Therefore, the waiting duration for EPD may be prolonged in response to recognizing the preset keyword from the voice information, so as to avoid recognizing an end point caused by a pause after the user speaks the preset keyword and to avoid a break in the voice recognition process due to recognizing an end point, which is conducive to prolonging the voice recognition time and improving the user experience.
As illustrated in FIG. 2, a method for voice recognition is provided in an embodiment of the disclosure, and includes the following steps.
At S201, voice recognition is performed on voice information.
At S202, a waiting duration for EPD is updated from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
For the relevant content of steps S201-S202, reference may be made to the above embodiment, which is not repeated here.
At S203, an EPD result is generated by continuing EPD on the voice information based on the waiting duration for EPD.
In the embodiment of the disclosure, after the waiting duration for EPD is updated from the first preset duration to the second preset duration, the waiting duration for EPD is the second preset duration, and the EPD on the voice information may be continued based on the waiting duration for EPD to generate an EPD result.
In an implementation, EPD may be performed on the voice information based on a preset EPD strategy in the process of performing voice recognition on the voice information. The EPD strategy may include setting a value of the waiting duration for EPD. Further, continuing EPD on the voice information based on the waiting duration for EPD may include updating the preset EPD strategy based on the waiting duration for EPD, and continuing EPD on the voice information based on the updated EPD strategy.
In an implementation, generating the EPD result by continuing EPD on the voice information based on the waiting duration for EPD may include acquiring a mute duration in the voice information starting from an initial moment of performing EPD, and determining that the generated EPD result is that an end point is not recognized, in response to recognizing that the mute duration is less than the waiting duration for EPD and the voice information includes human voice, which indicates that the mute duration is relatively short, the voice information includes the human voice, and the speaking process of the user has not ended. Alternatively, the generated EPD result is that an end point is recognized, in response to the mute duration reaching the waiting duration for EPD, which indicates that the mute duration is too long and the speaking process of the user has ended. Therefore, the method may comprehensively consider the magnitude relationship between the mute duration and the waiting duration for EPD, together with whether the voice information includes human voice, to generate the EPD result.
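Under the assumptions of the preceding paragraph, the decision rule can be sketched as a single function that combines the mute duration and the presence of human voice; the enum and function names are illustrative, not taken from the disclosure.

```python
from enum import Enum
from typing import Optional

class EpdResult(Enum):
    END_POINT_NOT_RECOGNIZED = 0
    END_POINT_RECOGNIZED = 1

def generate_epd_result(mute_duration_ms: float,
                        waiting_duration_ms: float,
                        has_human_voice: bool) -> Optional[EpdResult]:
    """Combine the mute duration and the presence of human voice into an EPD result."""
    if mute_duration_ms >= waiting_duration_ms:
        # The pause is too long: the speaking process is treated as ended.
        return EpdResult.END_POINT_RECOGNIZED
    if has_human_voice:
        # Short pause and the voice information contains human voice: no end point yet.
        return EpdResult.END_POINT_NOT_RECOGNIZED
    return None  # still waiting: not enough evidence for either result yet
```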
It may be understood that the initial moment of performing EPD refers to a start moment of performing EPD on the voice information. The mute duration in the voice information may be acquired starting from the initial moment of performing EPD. The initial moment of performing EPD may be configured according to actual situations, which is not limited here. For example, the initial moment of performing voice recognition may be determined as the initial moment of performing EPD; for instance, when the initial moment of performing voice recognition is 10:10:10, the initial moment of performing EPD may also be set to 10:10:10.
In an implementation, the initial moment of performing EPD may be updated based on a moment of recognizing the preset keyword from the voice information. In this way, the initial moment of performing EPD may be updated in real time based on the moment of recognizing the preset keyword from the voice information, so as to make the initial moment of performing EPD more flexible.
Optionally, a moment of recognizing a last recognition unit of the preset keyword from the voice information may be determined as the initial moment of performing EPD. The recognition unit may be configured according to actual situations, which is not limited here; for example, the recognition unit includes but is not limited to a word or a character.
For example, when the voice recognition result of the voice information is “navigating to”, and the recognition unit is a character, the recognition moment of “to” may be determined as the initial moment of performing EPD. For example, when the original initial moment of performing EPD is 10:10:10, and the recognition moment of “to” is 10:10:20, the initial moment of performing EPD may be updated from 10:10:10 to 10:10:20.
Further, the mute duration in the voice information may be acquired starting from the initial moment 10:10:20 of performing EPD. It may be understood that when the user speaks "navigating to", there may be a pause due to thinking, environmental interference and other factors; in this case, a mute portion may appear in the voice information, and the mute duration refers to the duration of the mute portion. For example, when the user speaks "navigating to a hotel", there may be a pause of 2 s between "to" and "a hotel": the recognition moment of "to" (that is, the initial moment of performing EPD) is 10:10:20, and the recognition moment of "a hotel" is 10:10:22; in this case, the mute duration acquired in the voice information is 2 s, counted from the initial moment 10:10:20 of performing EPD to the moment 10:10:22 of recognizing the human voice (that is, the recognition moment of "a hotel"). In the related art, the waiting duration for EPD is relatively short (generally 200 ms); in this case, when the mute duration reaches 200 ms, an EPD result is generated, and the EPD result is that an end point is recognized.
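The timeline in this example can be checked numerically with a short sketch; the calendar date is an arbitrary assumption added only so that the timestamps parse.

```python
from datetime import datetime

# Recognition moments from the example (date assumed only so the strings parse)
epd_start   = datetime.fromisoformat("2021-08-23T10:10:20")  # moment "to" is recognized
voice_again = datetime.fromisoformat("2021-08-23T10:10:22")  # moment "a hotel" is recognized

mute_duration_s = (voice_again - epd_start).total_seconds()
print(mute_duration_s)                # 2.0 -> the pause between "to" and "a hotel"
print(mute_duration_s * 1000 >= 200)  # True: a 200 ms waiting duration would cut the utterance off
print(mute_duration_s < 5.0)          # True: a 5 s waiting duration tolerates the pause
```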
In the embodiment of the disclosure, the waiting duration for EPD may be prolonged, for example, from 200 ms to 5 s, and the generated EPD result is that an end point is not recognized in response to recognizing that the mute duration is less than the waiting duration for EPD and the voice information includes human voice, thus avoiding recognizing an end point caused by a pause after the user speaks the preset keyword.
In summary, in the method for voice recognition according to the embodiment of the disclosure, after the waiting duration for EPD is updated from the first preset duration to the second preset duration, EPD on the voice information may be continued based on the waiting duration for EPD to generate the EPD result. Therefore, the waiting duration for EPD may be prolonged in response to recognizing the preset keyword from voice information, and EPD on the voice information may be continued based on the waiting duration for EPD, to generate a more humanized EPD result, thus avoiding recognizing an end point caused by a pause after the user speaks the preset keyword.
As illustrated in FIG. 3, a method for voice recognition is provided in an embodiment of the disclosure, and includes the following steps.
At S301, voice recognition is performed on voice information.
At S302, a waiting duration for EPD is updated from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
At S303, an EPD result is generated by continuing EPD on the voice information based on the waiting duration for EPD.
For the relevant content of steps S301-S303, reference may be made to the above embodiments, which is not repeated here.
At S304, voice recognition on the voice information is controlled to be continued in response to the EPD result being that the end point is not recognized.
At S305, voice recognition on the voice information is controlled to be stopped in response to the EPD result being that the end point is recognized.
In the embodiment of the disclosure, after the EPD result is generated, it may be controlled whether to continue voice recognition on the voice information based on the EPD result.
In an implementation, voice recognition on the voice information may be controlled to continue in response to the EPD result being that the end point is not recognized, indicating that the speaking process of the user has not ended; and voice recognition on the voice information may be controlled to stop in response to the EPD result being that the end point is recognized, indicating that the speaking process of the user has ended.
In an implementation, controlling to continue voice recognition on the voice information may include controlling to continue sending the voice information acquired by the apparatus for voice collection to the apparatus for voice recognition, and controlling the apparatus for voice recognition to continue the voice recognition on the voice information.
In an implementation, controlling to stop voice recognition on the voice information may include controlling to stop sending the voice information acquired by the apparatus for voice collection to the apparatus for voice recognition, and controlling the apparatus for voice recognition to stop the voice recognition on the voice information. Therefore, sending the acquired voice information to the apparatus for voice recognition may be stopped in response to recognizing an end point, which may save transmission bandwidth.
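A minimal sketch of this control logic is given below, assuming hypothetical collector and recognizer objects; the method names are placeholders for whatever interfaces an actual apparatus exposes.

```python
def on_epd_result(end_point_recognized: bool, collector, recognizer) -> None:
    """Route one EPD result to the collection/recognition apparatuses (hypothetical interfaces)."""
    if not end_point_recognized:
        # End point not recognized: keep streaming collected audio to the recognizer.
        recognizer.feed(collector.read_chunk())
    else:
        # End point recognized: stop sending audio, which saves transmission bandwidth.
        collector.stop_streaming()
        recognizer.finalize()  # stop recognition and emit the final result
```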
In an implementation, after the EPD result is generated, the waiting duration for EPD may be further updated from the second preset duration back to the first preset duration in response to the EPD result being that the end point is not recognized, that is, the waiting duration for EPD may be shortened in response to the EPD result being that the end point is not recognized, which is beneficial to improving the sensitivity of EPD, and EPD on the voice information may be continued based on the updated waiting duration for EPD to generate a new EPD result.
For example, when the user wants to speak "navigating to a hotel" and the voice recognition result of the voice information is "navigating to", the waiting duration for EPD may be updated from 200 ms to 5 s in response to recognizing the preset keyword from the voice information. When EPD on the voice information is continued based on the 5 s waiting duration for EPD and the generated EPD result is that an end point is not recognized, indicating that the mute duration of the voice information is less than 5 s and the voice information includes human voice, the waiting duration for EPD is updated from 5 s back to 200 ms, and EPD may be continued on the voice information based on the 200 ms waiting duration for EPD.
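Putting the pieces together, the adjustment cycle described above (prolong on keyword, shrink back once human voice resumes before the long deadline) may be sketched as a small state update; the function and its signature are illustrative only.

```python
SHORT_MS, LONG_MS = 200, 5000  # first and second preset durations from the example

def next_waiting_duration(current_ms: int,
                          keyword_just_recognized: bool,
                          end_point_recognized: bool) -> int:
    """Illustrative state update for the EPD waiting duration."""
    if keyword_just_recognized:
        return LONG_MS   # prolong: tolerate a pause after e.g. "navigating to"
    if current_ms == LONG_MS and not end_point_recognized:
        return SHORT_MS  # the user resumed speaking: restore the short, more sensitive duration
    return current_ms

# "navigating to" recognized -> 5000 ms; "a hotel" arrives before 5 s -> back to 200 ms
duration = next_waiting_duration(200, True, False)
assert duration == 5000
duration = next_waiting_duration(duration, False, False)
assert duration == 200
```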
In summary, according to the method for voice recognition in the embodiment of the disclosure, after the EPD result is generated, voice recognition on the voice information may be controlled to continue in response to the EPD result being that the end point is not recognized; alternatively, voice recognition on the voice information may be controlled to stop in response to the EPD result being that the end point is recognized, which is beneficial to saving computing resources.
As illustrated in FIG. 4, an apparatus 400 for voice recognition is provided in an embodiment of the disclosure, and includes a recognition module 401 and an update module 402.
The recognition module 401 is configured to perform voice recognition on voice information; and the update module 402 is configured to update a waiting duration for end-point detection (EPD) from a first preset duration to a second preset duration in response to recognizing a preset keyword from the voice information, where the first preset duration is less than the second preset duration.
In an embodiment of the disclosure, the apparatus 400 for voice recognition further includes a detection module configured to generate an EPD result by continuing EPD on the voice information based on the waiting duration for EPD.
In an embodiment of the disclosure, the detection module is further configured to: acquire a mute duration in the voice information starting from an initial moment of performing EPD; and determine that the generated EPD result is that an end point is not recognized, in response to recognizing that the mute duration is less than the waiting duration for EPD and the voice information comprises human voice; or, determine that the generated EPD result is that an end point is recognized, in response to recognizing that the mute duration reaches the waiting duration for EPD.
In an embodiment of the disclosure, the update module 402 is further configured to: update the initial moment of performing EPD based on the moment of recognizing the preset keyword from the voice information.
In an embodiment of the disclosure, the update module 402 is further configured to: determine a moment of recognizing a last recognition unit of the preset keyword from the voice information as the initial moment of performing EPD.
In an embodiment of the disclosure, the apparatus 400 for voice recognition further includes a control module. The control module is configured to: control to continue voice recognition on the voice information in response to the EPD result being that the end point is not recognized; or, control to stop voice recognition on the voice information in response to the EPD result being that the end point is recognized.
In an embodiment of the disclosure, the update module 402 is further configured to: update the waiting duration for EPD from the second preset duration to the first preset duration in response to the EPD result being that the end point is not recognized.
In summary, in the apparatus for voice recognition according to the embodiment of the disclosure, the waiting duration for EPD is updated from the first preset duration to the second preset duration in response to recognizing the preset keyword from the voice information, where the first preset duration is less than the second preset duration. Therefore, the waiting duration for EPD may be prolonged in response to recognizing the preset keyword from the voice information, so as to avoid recognizing an end point caused by a pause after the user speaks the preset keyword and to avoid a break in the voice recognition process due to recognizing an end point, which is conducive to prolonging the voice recognition time and improving the user experience.
Collection, storage, use, processing, transmission, provision and disclosure of the user personal information involved in the technical solution of the disclosure comply with relevant laws and regulations, and do not violate public order and good customs.
According to the embodiment of the disclosure, an electronic device, a readable storage medium and a computer program product are further provided in the disclosure.
As illustrated in FIG. 5, the device 500 includes a computing unit 501, which may perform various appropriate actions and processes based on a computer program stored in a read-only memory (ROM) 502 or a computer program loaded from a storage unit 508 into a random access memory (RAM) 503. Various programs and data required for the operation of the device 500 may also be stored in the RAM 503. The computing unit 501, the ROM 502 and the RAM 503 are connected to each other through a bus 504, and an input/output (I/O) interface 505 is also connected to the bus 504.
Several components in the device 500 are connected to the I/O interface 505, and include: an input unit 506, for example, a keyboard, a mouse, etc.; an output unit 507, for example, various types of displays, speakers, etc.; a storage unit 508, for example, a magnetic disk, an optical disk, etc.; and a communication unit 509, for example, a network card, a modem, a wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices over a computer network such as the Internet and/or various telecommunication networks.
The computing unit 501 may be various general and/or dedicated processing components with processing and computing ability. Some examples of the computing unit 501 include but are not limited to a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller, etc. The computing unit 501 performs the various methods and processes described above, for example, the method for voice recognition. For example, in some embodiments, the method for voice recognition may be implemented as a computer software program, which is tangibly contained in a machine readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed on the device 500 through the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the above method for voice recognition may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the method for voice recognition in any other appropriate way (for example, by means of firmware).
Various implementation modes of the systems and technologies described above may be achieved in a digital electronic circuit system, a field programmable gate array (FPGA), an application-specific integrated circuit (ASIC), an application specific standard product (ASSP), a system-on-chip (SOC) system, a complex programmable logic device, computer hardware, firmware, software, and/or combinations thereof. The various implementation modes may include: being implemented in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, and the programmable processor may be a dedicated or general-purpose programmable processor that may receive data and instructions from a storage system, at least one input apparatus and at least one output apparatus, and transmit the data and instructions to the storage system, the at least one input apparatus and the at least one output apparatus.
The computer codes configured to implement the method of the present disclosure may be written in one or any combination of multiple programming languages. These computer codes may be provided to a processor or a controller of a general purpose computer, a dedicated computer, or other programmable data processing apparatuses, so that the functions/operations specified in the flowcharts and/or block diagrams are performed when the codes are executed by the processor or controller. The computer codes may be executed entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as an independent software package, or entirely on the remote machine or server.
In the context of the disclosure, a machine readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable storage medium may include but is not limited to an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any appropriate combination thereof. More specific examples of a machine readable storage medium include an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (an EPROM or a flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above.
In order to provide interaction with the user, the systems and technologies described here may be implemented on a computer, and the computer has: a display apparatus for displaying information to the user (for example, a CRT (cathode ray tube) or an LCD (liquid crystal display) monitor); and a keyboard and a pointing apparatus (for example, a mouse or a trackball) through which the user may provide input to the computer. Other types of apparatuses may further be configured to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (for example, visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form (including an acoustic input, a speech input, or a tactile input).
The systems and technologies described herein may be implemented in a computing system including back-end components (for example, as a data server), or a computing system including middleware components (for example, an application server), or a computing system including front-end components (for example, a user computer with a graphical user interface or a web browser through which the user may interact with the implementation mode of the system and technology described herein), or a computing system including any combination of such back-end components, middleware components or front-end components. The system components may be connected to each other through any form or medium of digital data communication (for example, a communication network). Examples of communication networks include: a local area network (LAN), a wide area network (WAN), a blockchain network, and the Internet.
The computer system may include a client and a server. The client and the server are generally far away from each other and generally interact with each other through a communication network. The relationship between the client and the server is generated by computer programs running on the corresponding computers and having a client-server relationship with each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
According to an embodiment of the disclosure, a computer program product including a computer program is further provided in the disclosure. The computer program is configured to perform the steps of the method for voice recognition described in the above embodiments when executed by a processor.
It should be understood that the various forms of procedures shown above may be used to reorder, add or delete blocks. For example, the blocks described in the disclosure may be executed in parallel, sequentially, or in different orders, as long as the desired result of the technical solution disclosed in the present disclosure can be achieved, which is not limited herein.
The above specific implementations do not constitute a limitation on the protection scope of the disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations and substitutions may be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, etc., made within the principle of embodiments of the present disclosure shall be included within the protection scope of the present disclosure.