This application claims priority to Chinese Patent Application No. 201811291610.6, filed with the China National Intellectual Property Administration on Oct. 30, 2018 and entitled “AUDIO CONTROL METHOD AND ELECTRONIC DEVICE”, which is incorporated herein by reference in its entirety.
This application relates to the field of terminal technologies, and in particular, to an audio control method and an electronic device.
Currently, when a screen of an electronic device (such as a mobile phone or a tablet computer) is locked, if a user needs to perform an operation on the electronic device, the user needs to first unlock the electronic device, enter a home screen, and then perform a corresponding operation.
For example, if the user needs to make a mobile phone payment, in a case of WeChat payment, when the screen of the mobile phone is locked or off, the user needs to first unlock the mobile phone and enter the home screen. Then, the user finds a WeChat icon on the home screen, and taps the WeChat icon, so that the mobile phone displays a WeChat user interface. The user further needs to perform an operation on a corresponding virtual button in the WeChat user interface, to enable the mobile phone to display a QR code interface of WeChat Money, so that the user can make a payment to a merchant.
It can be learned from the foregoing description that in the foregoing conventional manner, when the user uses a function of the mobile phone, for example, the WeChat payment, two or more operation steps are required. This is relatively cumbersome, and user experience is poor.
This application provides an audio control method and an electronic device, to help reduce operation steps performed when a user uses the electronic device, and to improve user experience to some extent.
According to a first aspect, an embodiment of this application provides an audio control method, where the method includes:
In this embodiment of this application, the user may control, through audio, the electronic device to perform an operation. This helps reduce operation steps performed when the user uses the electronic device. In addition, because the electronic device obtains different scores of audio signals through voiceprint recognition, flexible and differentiated operations can be implemented. This helps reduce a possibility that the electronic device rejects a user request due to incorrect determining during audio control, implements controlling of the electronic device through audio, and increases the user's trust in the audio control function of the electronic device. In addition, when the score of the audio signal is relatively low, security authentication needs to be performed on the electronic device. This also helps improve security of the audio control, and therefore improves user experience.
In a possible design, the prompting, by the electronic device, the user to perform security authentication in a manner other than a voice manner includes:
In a possible design, the method further includes: when the score of the first audio signal is less than or equal to the second threshold, skipping, by the electronic device, unlocking the electronic device, and skipping performing the first operation. This helps improve security.
In a possible design, the method further includes: when the score of the first audio signal is less than or equal to the second threshold, sending, by the electronic device, first voice prompt information, where the first voice prompt information is used to prompt the user that the recognition of the first audio signal fails; and/or displaying first prompt information in the lock screen interface, where the first prompt information is used to prompt the user that the recognition of the first audio signal fails. According to the foregoing technical solution, the user can learn of a reason why the electronic device skips unlocking the electronic device and skips performing the first operation. This helps improve interaction between the electronic device and the user, and further improves user experience to some extent.
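The two-threshold logic described above (unlock and perform the operation above the first threshold, prompt for non-voice security authentication between the two thresholds, and reject at or below the second threshold) can be sketched as follows. This is a minimal illustration only: the threshold values, function name, and outcome labels are assumptions, not values from this application.

```python
# Hypothetical sketch of the two-threshold decision described above.
# Threshold values and names are illustrative, not from this application.

FIRST_THRESHOLD = 0.85   # high confidence: unlock and perform the operation
SECOND_THRESHOLD = 0.60  # at or below this: reject the request outright

def handle_audio_request(score: float) -> str:
    """Map a voiceprint recognition score to one of three outcomes."""
    if score > FIRST_THRESHOLD:
        return "unlock_and_perform"    # recognition succeeds
    if score > SECOND_THRESHOLD:
        return "prompt_security_auth"  # e.g. fingerprint or password
    return "reject"                    # keep locked, skip the first operation
```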
In a possible design, the electronic device may perform voiceprint recognition on the first audio signal in the following manner, to determine the score of the first audio signal: determining, by the electronic device from at least one pre-configured user keyword voiceprint model, a user keyword voiceprint model corresponding to a keyword included in the first audio signal; and extracting a voiceprint feature of the first audio signal, and matching the extracted voiceprint feature with the determined user keyword voiceprint model corresponding to the keyword of the first audio signal, to determine the score of the first audio signal. The foregoing technical solution helps improve reliability of the score that is of the first audio signal and that is determined by the electronic device.
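One simple way to picture the matching step is to treat the pre-configured keyword voiceprint model as a stored reference feature vector and score the extracted feature against it. The sketch below uses cosine similarity purely for illustration; a real system might instead use GMM-UBM scoring or neural speaker embeddings, and all names here are assumptions.

```python
import numpy as np

# Illustrative sketch: score an utterance's voiceprint feature against a
# per-keyword voiceprint model. Here the "model" is a stored reference
# vector and the score is cosine similarity; this is an assumption, not
# the scoring method of this application.

def score_against_keyword_model(feature: np.ndarray,
                                keyword_models: dict,
                                keyword: str) -> float:
    """Return a similarity score in [-1, 1] for the given keyword."""
    model_vector = keyword_models[keyword]  # pre-configured model lookup
    num = float(np.dot(feature, model_vector))
    den = float(np.linalg.norm(feature) * np.linalg.norm(model_vector))
    return num / den if den > 0 else 0.0
```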
In a possible design, the electronic device may pre-configure the user keyword voiceprint model in the following manner:
In a possible design, the method further includes: when identifying that the keyword included in the second audio signal is inconsistent with the keyword prompted by the electronic device, prompting, by the electronic device, the user that the keyword is incorrect. This helps improve interaction between the user and the electronic device, and improves user experience.
In a possible design, the voiceprint feature of the first audio signal includes at least one of a mel-frequency cepstral coefficient MFCC, perceptual linear prediction PLP, and linear predictive coding LPC.
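For concreteness, the MFCC feature mentioned above can be sketched as a frame → power spectrum → mel filterbank → log → DCT-II pipeline. The following is a minimal, NumPy-only illustration of that pipeline for a single frame; the parameter values (frame length, FFT size, filter count, coefficient count) are assumptions, and a production system would use a tested library implementation.

```python
import numpy as np

# Minimal, illustrative single-frame MFCC pipeline. Parameter values are
# assumptions; this is a sketch, not this application's implementation.

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sample_rate=16000, frame_len=400, n_filters=26, n_coeffs=13):
    # Window one frame of the signal
    frame = signal[:frame_len] * np.hamming(frame_len)
    # Power spectrum of the frame
    n_fft = 512
    power = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank, spaced evenly on the mel scale
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2),
                             n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        for k in range(left, center):
            fbank[i - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[i - 1, k] = (right - k) / max(right - center, 1)
    log_energy = np.log(fbank @ power + 1e-10)
    # DCT-II to decorrelate filterbank energies; keep the first n_coeffs
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1)
                 / (2 * n_filters))
    return dct @ log_energy
```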
In a possible design, the receiving, by an electronic device, a first audio signal includes: receiving, by the electronic device, the first audio signal collected by a headset connected to the electronic device. According to the foregoing technical solution, the first audio signal is less affected by environmental noise, so that the electronic device can identify the first audio signal, and control the electronic device.
In a possible design, when the headset is a bone conduction headset, the first audio signal further includes a bone conduction signal, and the bone conduction signal is a voice signal generated by vibration of an ear bone when the user makes a voice. This helps improve security of authentication.
According to a second aspect, an embodiment of this application provides an electronic device. The electronic device includes one or more processors; a memory; a plurality of application programs; and one or more computer programs, where the one or more computer programs are stored in the memory, the one or more computer programs include an instruction, and when the instruction is executed by the electronic device, the electronic device is enabled to perform the following steps:
In a possible design, the user may be prompted, in the following manner, to perform security authentication in a manner other than a voice manner:
In a possible design, the instruction further includes: an instruction used to: when the score of the first audio signal is less than or equal to the second threshold, skip unlocking the electronic device and skip performing the first operation.
In a possible design, the instruction further includes: an instruction used to send first voice prompt information when the score of the first audio signal is less than or equal to the second threshold, where the first voice prompt information is used to prompt the user that the recognition of the first audio signal fails; and/or an instruction used to display first prompt information in the lock screen interface when the score of the first audio signal is less than or equal to the second threshold, where the first prompt information is used to prompt the user that the recognition of the first audio signal fails.
In a possible design, voiceprint recognition may be performed on the first audio signal in the following manner, to determine the score of the first audio signal:
In a possible design, the instruction further includes: an instruction used to receive a second audio signal of the user, where the second audio signal includes a second voice signal, and the second voice signal is spoken by the user based on a keyword prompted by the electronic device; an instruction used to: when identifying that a keyword included in the second audio signal is consistent with the keyword prompted by the electronic device, extract a voiceprint feature of the second audio signal; and an instruction used to configure, based on the voiceprint feature of the second audio signal and a pre-stored background model, a user keyword voiceprint model corresponding to the keyword prompted by the electronic device.
In a possible design, the instruction further includes an instruction used to: when identifying that the keyword included in the second audio signal is inconsistent with the keyword prompted by the electronic device, prompt the user that the keyword is incorrect.
In a possible design, the voiceprint feature of the first audio signal includes at least one of a mel-frequency cepstral coefficient MFCC, perceptual linear prediction PLP, and linear predictive coding LPC.
In a possible design, the first audio signal is collected and reported to the electronic device by a headset connected to the electronic device.
In a possible design, when the headset is a bone conduction headset, the first audio signal further includes a bone conduction signal, and the bone conduction signal is a voice signal generated by vibration of an ear bone when the user makes a voice.
According to a third aspect, an embodiment of this application provides a chip. The chip is coupled to a memory in an electronic device. Therefore, when the chip runs, a computer program stored in the memory is invoked to implement the method according to the first aspect and any possible design of the first aspect in the embodiments of this application.
According to a fourth aspect, an embodiment of this application provides a computer storage medium, the computer storage medium stores a computer program, and when the computer program is run on an electronic device, the electronic device is enabled to perform the method in any one of the first aspect or the possible designs of the first aspect.
According to a fifth aspect, an embodiment of this application provides a computer program product. When the computer program product runs on an electronic device, the electronic device is enabled to perform the method in any one of the first aspect or the possible designs of the first aspect.
In addition, for technical effects brought by any possible design manner in the second to the fifth aspects, refer to technical effects brought by different design manners in the first aspect, and details are not described herein again.
It should be understood that the embodiments of this application may be applied to an electronic device. For example, the electronic device may be a portable electronic device, such as a mobile phone, a tablet computer, a wearable device (such as a smart watch) having a wireless communication function, or a vehicle-mounted device. The portable electronic device includes but is not limited to a portable electronic device that runs iOS®, Android®, Microsoft®, or another operating system. Alternatively, the portable electronic device may be, for example, a laptop (Laptop) having a touch-sensitive surface (for example, a touch panel). It should be further understood that, in some other embodiments of this application, the electronic device 100 may alternatively be a desktop computer having a touch-sensitive surface (for example, a touch panel).
For example,
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural-network processing unit (neural-network processing unit, NPU). Different processing units may be separate devices, or may be integrated into one or more processors.
In some embodiments, a memory may be further disposed in the processor 110, and is configured to store an instruction and data. For example, the memory in the processor 110 may be a cache memory. The memory may be configured to store an instruction or data that is just used or cyclically used by the processor 110. If the processor 110 needs to use the instruction or the data again, the processor 110 may directly invoke the instruction or the data from the memory. This helps avoid repeated access, reduces a waiting time period of the processor 110, and improves system efficiency.
The internal memory 121 may be configured to store computer-executable program code. The executable program code includes an instruction. The processor 110 runs the instruction stored in the internal memory 121, to implement various function applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store an operating system, an application required by at least one function (for example, a sound play function and an image play function), and the like. The data storage area may store data (such as audio data and an address book) created during use of the electronic device 100, and the like. In addition, the internal memory 121 may include a high-speed random access memory, or may include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory device, or a universal flash storage (universal flash storage, UFS).
The external memory interface 122 may be configured to connect to an external storage card (such as a micro SD card) to extend a storage capability of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 122, to implement a data storage function, for example, to store files such as music and a video in the external memory card.
The antenna 1 and the antenna 2 are configured to: transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be configured to cover a single communications frequency band or a plurality of communications frequency bands. Different antennas may be further multiplexed to improve antenna utilization. For example, the antenna 1 may be multiplexed as a diversity antenna in a wireless local area network. In some other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communications module 131 may provide a solution that is for wireless communication including 2G/3G/4G/5G and the like and that is applied to the electronic device 100. The mobile communications module 131 may include at least one filter, a switch, a power amplifier, a low noise amplifier (low noise amplifier, LNA), and the like. The mobile communications module 131 may receive an electromagnetic wave signal through the antenna 1, perform processing such as filtering or amplification on the received electromagnetic wave signal, and transmit the electromagnetic wave signal to the modem processor for demodulation. The mobile communications module 131 may further amplify a signal modulated by the modem processor, and convert the signal to an electromagnetic wave signal through the antenna 1 for radiation. In some embodiments, at least some function modules in the mobile communications module 131 may be disposed in the processor 110. In some embodiments, at least some function modules in the mobile communications module 131 may be disposed in a same device as at least some modules in the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate a to-be-sent low-frequency baseband signal into a medium or high-frequency signal. The demodulator is configured to demodulate a received electromagnetic wave signal into a low-frequency baseband signal. Then, the demodulator transmits the low-frequency baseband signal obtained through demodulation to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is transmitted to the application processor. The application processor outputs a sound signal through an audio device (not limited to the speaker 140A, the receiver 140B, or the like), or displays an image or a video through the display screen 151. In some embodiments, the modem processor may be an independent device. In some other embodiments, the modem processor may be independent of the processor 110, and is disposed in a same device as the mobile communications module 131 or another function module.
The wireless communications module 132 may provide a solution, applied to the electronic device 100, to wireless communication including a wireless local area network (wireless local area networks, WLAN) (for example, a Wi-Fi network), Bluetooth (bluetooth, BT), a global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field communication (near field communication, NFC), an infrared (infrared, IR) technology, or the like. The wireless communications module 132 may be one or more devices integrating at least one communications processing module. The wireless communications module 132 receives an electromagnetic wave signal through the antenna 2, performs frequency modulation and filtering processing on the electromagnetic wave signal, and sends a processed signal to the processor 110. The wireless communications module 132 may further receive a to-be-sent signal from the processor 110, perform frequency modulation and amplification on the signal, and convert a processed signal into an electromagnetic wave signal through the antenna 2 for radiation.
In some embodiments, the antenna 1 and the mobile communications module 131 of the electronic device 100 are coupled, and the antenna 2 and the wireless communications module 132 of the electronic device 100 are coupled, so that the electronic device 100 can communicate with a network and another device by using a wireless communications technology. The wireless communications technology may include a global system for mobile communications (global system for mobile communications, GSM), a general packet radio service (general packet radio service, GPRS), code division multiple access (code division multiple access, CDMA), wideband code division multiple access (wideband code division multiple access, WCDMA), time division-code division multiple access (time-division code division multiple access, TD-SCDMA), long term evolution (long term evolution, LTE), BT, the GNSS, the WLAN, the NFC, the FM, the IR technology, and/or the like. The GNSS may include a global positioning system (global positioning system, GPS), a global navigation satellite system (global navigation satellite system, GLONASS), a beidou navigation satellite system (beidou navigation satellite system, BDS), a quasi-zenith satellite system (quasi-zenith satellite system, QZSS), and/or a satellite based augmentation system (satellite based augmentation systems, SBAS).
The electronic device 100 may implement audio functions such as music play and recording through the audio module 140, the speaker 140A, the receiver 140B, the microphone 140C, the headset jack 140D, the application processor, and the like.
The audio module 140 may be configured to convert digital audio information into an analog audio signal for output, and is also configured to convert an analog audio input into a digital audio signal. The audio module 140 may be further configured to perform audio signal encoding and decoding. In some embodiments, the audio module 140 may be disposed in the processor 110, or some function modules in the audio module 140 are disposed in the processor 110.
The speaker 140A, also referred to as a “horn”, is configured to: convert an audio electrical signal into a sound signal, and play the signal. For example, the electronic device 100 may play music by using the speaker 140A. For another example, when the electronic device 100 enables hands-free calling, a voice received by the mobile communications module 131 or the wireless communications module 132 may be played by using the speaker 140A.
The receiver 140B, also referred to as an “earpiece”, is configured to convert an audio electrical signal into a voice. For example, the user may listen to a voice by moving the receiver 140B close to a human ear.
The microphone 140C, also referred to as a “mike” or a “mic”, is configured to convert a collected user's voice into an electrical signal. When making a call or sending a voice, the user may make a voice by moving the mouth close to the microphone 140C. The microphone 140C may be configured to collect the user's voice, and then convert the user's voice into an electrical signal. At least one microphone 140C may be disposed in the electronic device 100. In some other embodiments, two microphones 140C may be disposed in the electronic device 100, and in addition to collecting the user's voice, a noise reduction function may be further implemented. In some other embodiments, three, four, or more microphones 140C may be further disposed on the electronic device 100, to implement voice collection and noise reduction, identify a voice source, implement a directional recording function, and the like.
The headset jack 140D is configured to connect to a wired headset. The headset jack 140D may be a USB interface 170, or may be a 3.5 mm open mobile terminal platform (open mobile terminal platform, OMTP) standard interface or a cellular telecommunications industry association of the USA (cellular telecommunications industry association of the USA, CTIA) standard interface. In addition, the electronic device 100 may be further connected to the headset in a wireless manner such as Bluetooth. It should be noted that, in this embodiment of this application, the headset connected to the electronic device 100 may be a bone conduction headset, or may be a headset of another type. For example, the headset of the another type may include another vibration sensing sensor (such as an optical sensor or an acceleration sensor) different from the bone conduction sensor. The bone conduction headset may collect the user's voice by using the microphone, and collect a bone conduction signal by using the bone conduction sensor. The bone conduction signal is a voice signal generated by vibration of an ear bone when the user makes a voice. The electronic device 100 may determine, based on the bone conduction signal, that the voice collected by the headset by using the microphone is a voice made by a living body (for example, a person). In other words, the bone conduction signal enables the electronic device 100 to determine that the collected voice is made by the user in person, rather than a recording of the user's voice. This helps prevent a malicious person from performing an operation on the electronic device 100 by using a recording of the user's voice, and reduces a possibility of a misoperation on the electronic device 100.
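One simple way to picture the liveness determination described above is to check that the air-conducted microphone signal co-occurs with a bone conduction signal, since a replayed recording would produce microphone input but no matching ear-bone vibration. The sketch below compares energy envelopes of the two channels; the envelope method, window size, and correlation threshold are all assumptions for illustration, not this application's algorithm.

```python
import numpy as np

# Hypothetical liveness check: a recorded (replayed) voice reaches the
# microphone but produces no matching bone conduction signal, so the two
# channels' energy envelopes should correlate only for a live speaker.
# Window size and threshold are assumptions.

def is_live_voice(mic_signal: np.ndarray,
                  bone_signal: np.ndarray,
                  threshold: float = 0.5) -> bool:
    """Return True when the two channels' energy envelopes correlate."""
    def envelope(x, win=160):
        pad = (-len(x)) % win            # pad so the length divides evenly
        x = np.pad(np.abs(x), (0, pad))
        return x.reshape(-1, win).mean(axis=1)
    a, b = envelope(mic_signal), envelope(bone_signal)
    n = min(len(a), len(b))
    a, b = a[:n], b[:n]
    if a.std() == 0 or b.std() == 0:     # a flat channel cannot correlate
        return False
    corr = float(np.corrcoef(a, b)[0, 1])
    return corr >= threshold
```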
The electronic device 100 can implement a display function by using the GPU, the display screen 151, the application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display screen 151 and the application processor. The GPU is configured to perform mathematical and geometric calculation, and is used for graphics rendering. The processor 110 may include one or more GPUs that execute a program instruction to generate or change display information.
The display screen 151 may be configured to display an image, a video, and the like. The display screen 151 may include a display panel. The display panel may be a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), a flexible light-emitting diode (flex light-emitting diode, FLED), a mini-LED, a micro-LED, a micro-OLED, a quantum dot light emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the electronic device 100 may include one or N display screens 151, where N is a positive integer greater than 1.
The electronic device may implement a photographing function through the ISP, the camera 153, the video codec, the GPU, the display screen 151, the application processor, and the like.
The camera 153 may be configured to capture a static image or a video. Generally, the camera 153 includes a lens and an image sensor. The camera 153 projects an optical image collected by the lens onto the image sensor for imaging. The image sensor may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (complementary metal-oxide-semiconductor, CMOS) photoelectric transistor. The image sensor converts the optical image into an electrical signal, and then transmits the electrical signal to the ISP to convert the electrical signal into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into a standard image signal in an RGB format, a YUV format, or the like. In some embodiments, the electronic device 100 may include one or N cameras 153, where N is a positive integer greater than 1.
The ISP may further perform algorithm optimization on noise, luminance, and complexion of the image. The ISP may further optimize parameters such as exposure and a color temperature of a photographing scenario. In some embodiments, the ISP may be disposed in the camera 153.
The button 154 may include a power button, a volume button, and the like. The button 154 may be a mechanical button, or may be a touch button. The electronic device 100 may receive a button input, and generate a key signal input related to a user setting and function control of the electronic device 100.
The sensor module 160 may include one or more sensors, for example, a touch sensor 160A, a fingerprint sensor 160B, a gyroscope sensor 160C, a pressure sensor 160D, and an acceleration sensor 160E.
The touch sensor 160A may also be referred to as a “touch panel”. The touch sensor 160A may be disposed in the display screen 151. The touch sensor 160A and the display screen 151 constitute a touchscreen that is also referred to as a “touch control screen”. The touch sensor 160A is configured to detect a touch operation performed on or near the touch sensor 160A. The touch sensor 160A may transmit the detected touch operation to the application processor, to determine a type of a touch event. Visual output related to the touch operation may be provided by using the display screen 151. In some other embodiments, the touch sensor 160A may also be disposed on a surface of the electronic device 100 at a position different from a position of the display screen 151.
The fingerprint sensor 160B may be configured to collect a fingerprint. The electronic device 100 may use a feature of the collected fingerprint to implement fingerprint unlocking, accessing an application lock, fingerprint photographing, fingerprint call answering, and the like.
The gyroscope sensor 160C may be configured to determine a moving posture of the electronic device 100. In some embodiments, angular velocities of the electronic device 100 around three axes (to be specific, an x-axis, a y-axis, and a z-axis) may be determined by using the gyroscope sensor 160C. The gyroscope sensor 160C may be used for image stabilization during photographing. For example, when the shutter is pressed, the gyroscope sensor 160C detects an angle at which the electronic device 100 shakes, and calculates, based on the angle, a distance that needs to be compensated for a lens module, so that the lens cancels the shake of the electronic device 100 through reverse motion, to implement image stabilization. The gyroscope sensor 160C may be further used in a navigation scenario and a somatic game scenario.
The pressure sensor 160D is configured to sense a pressure signal, and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 160D may be disposed in the display screen 151. There are many types of pressure sensors 160D, for example, a resistive pressure sensor, an inductive pressure sensor, and a capacitive pressure sensor. The capacitive pressure sensor may include at least two parallel plates that have conductive materials. When a force acts on the pressure sensor 160D, capacitance between electrodes changes. The electronic device 100 determines pressure strength based on the change in capacitance. When a touch operation acts on the display screen 151, the electronic device 100 detects strength of the touch operation by using the pressure sensor 160D. The electronic device 100 may also calculate a touch position based on a detection signal of the pressure sensor 160D. In some embodiments, touch operations that are performed at a same touch position but have different touch operation intensity may correspond to different operation instructions. For example, when a touch operation whose touch operation intensity is less than a first pressure threshold is performed on an SMS message icon, an instruction for viewing an SMS message is executed. When a touch operation whose touch operation intensity is greater than or equal to the first pressure threshold is performed on the SMS message icon, an instruction for creating a new SMS message is executed.
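The intensity-dependent dispatch in the SMS example above can be sketched as a simple threshold comparison. The threshold value, units, and names below are illustrative assumptions, not values from this application.

```python
# Illustrative mapping from touch intensity to an operation instruction,
# mirroring the SMS message icon example above. The threshold value and
# normalized units are assumptions.

FIRST_PRESSURE_THRESHOLD = 0.4  # hypothetical normalized pressure units

def sms_icon_action(pressure: float) -> str:
    """Dispatch on touch operation intensity at the SMS message icon."""
    if pressure < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"    # lighter press: view an SMS message
    return "create_sms"      # firmer press: create a new SMS message
```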
The acceleration sensor 160E may detect magnitudes of accelerations in various directions (usually on three axes) of the electronic device 100. When the electronic device 100 is static, a magnitude and a direction of gravity may be detected. The acceleration sensor 160E may be further configured to identify a posture of the electronic device, and is applied to applications such as landscape/portrait orientation switching and a pedometer.
In some embodiments, the sensor module 160 may further include an ambient optical sensor, a range sensor, an optical proximity sensor, a bone conduction sensor, a heart rate sensor, and the like. For example, the bone conduction sensor may obtain a vibration signal of a vibration bone of a human vocal-cord part. For another example, the bone conduction sensor may also contact a body pulse to receive a blood pressure beating signal. In some embodiments, the bone conduction sensor may also be disposed in the headset, to obtain a bone conduction headset. The audio module 140 may obtain a speech signal through parsing based on the vibration signal that is of the vibration bone of the vocal-cord part and that is obtained by the bone conduction sensor, to implement a speech function. The application processor may parse heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor, to implement a heart rate detection function.
In some other embodiments, the processor 110 may alternatively include one or more interfaces. For example, the interface may be a SIM card interface 152. For another example, the interface may be a USB interface 170. For still another example, the interface may be an inter-integrated circuit (inter-integrated circuit, I2C) interface, an inter-integrated circuit sound (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver/transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (general-purpose input/output, GPIO) interface, and/or the like. It may be understood that, in this embodiment of this application, the processor 110 may be connected to different modules of the electronic device 100 by using interfaces, so that the electronic device 100 can implement different functions, for example, photographing and processing. It should be noted that a connection manner of an interface in the electronic device 100 is not limited in this embodiment of this application.
The SIM card interface 152 may be configured to connect to a SIM card. The SIM card may be inserted into the SIM card interface 152 or detached from the SIM card interface 152, to implement contact with or separation from the electronic device 100. The electronic device 100 may support one or N SIM card interfaces, where N is a positive integer greater than 1. The SIM card interface 152 may support a nano SIM card, a micro SIM card, a SIM card, and the like. A plurality of cards may be simultaneously inserted into one SIM card interface 152. The plurality of cards may be of a same type, or may be of different types. The SIM card interface 152 may also be compatible with different types of SIM cards. The SIM card interface 152 may also be compatible with an external memory card. The electronic device 100 may interact with a network by using the SIM card, to implement a call function, a data communication function, and the like. In some embodiments, the electronic device 100 may alternatively use an eSIM, namely, an embedded SIM card. The eSIM card may be embedded in the electronic device 100, and cannot be separated from the electronic device 100.
The USB interface 170 is an interface that conforms to USB standard specifications. For example, the USB interface 170 may include a mini USB interface, a micro USB interface, or a USB type C interface. The USB interface 170 may be configured to connect to a charger to charge the electronic device 100, or may be configured to perform data transmission between the electronic device 100 and a peripheral device, or may be configured to connect to a headset to play audio by using the headset. The USB interface 170 may be further configured to connect to another electronic device, for example, an augmented reality (augmented reality, AR) device.
The charging management module 180 is configured to receive a charging input from the charger. The charger may be a wireless charger, or may be a wired charger. In some embodiments of wired charging, the charging management module 180 may receive a charging input from the wired charger through the USB interface 170. In some embodiments of wireless charging, the charging management module 180 may receive a wireless charging input by using a wireless charging coil of the electronic device 100. The charging management module 180 supplies power for the electronic device 100 by using the power management module 181 while charging the battery 182.
The power management module 181 is configured to connect to the battery 182, the charging management module 180, and the processor 110. The power management module 181 receives an input of the battery 182 and/or the charging management module 180, and supplies power to the processor 110, the internal memory 121, an external memory, the display screen 151, the camera 153, the mobile communications module 131, the wireless communications module 132, and the like. The power management module 181 may be further configured to monitor parameters such as a battery capacity, a battery cycle count, and a battery health status (electric leakage or impedance). In some other embodiments, the power management module 181 may alternatively be disposed in the processor 110. In some other embodiments, the power management module 181 and the charging management module 180 may alternatively be disposed in a same device.
It should be understood that a hardware structure of the electronic device 100 shown in
Embodiments of this application provide an audio control method, so that an electronic device can be controlled through audio. Compared with a conventional operation manner, this helps simplify operation steps of a user and improve user experience. The following describes the embodiments of this application in detail by using the electronic device 100 as an example.
For example, as shown in
When the obtained score of the first audio signal is less than the first threshold and greater than a second threshold, the electronic device 100 does not unlock the electronic device 100, but prompts the user to perform security authentication. For example, when the obtained score of the first audio signal is less than the first threshold and greater than the second threshold, the electronic device 100 displays a lock screen interface on the display screen 151. The lock screen interface is used to prompt the user to perform security authentication. For example, the lock screen interface may be a user interface 210 shown in
In some embodiments, after the security authentication succeeds, the electronic device 100 may automatically unlock the electronic device 100 and perform the operation corresponding to “WeChat Pay”. For example, when the keyword of the first audio signal is “WeChat Pay”, the operation performed by the electronic device 100 may be displaying the QR code interface of WeChat Money on the display screen 151. In some other embodiments, after the security authentication succeeds, the electronic device 100 may further automatically unlock the electronic device 100 and display an unlocked interface on the display screen 151. For example, after the electronic device 100 successfully performs security authentication through the face recognition, the electronic device 100 automatically unlocks the electronic device 100, and displays an unlocked interface on the display screen 151. For example, the unlocked interface may be a user interface 500 shown in
When the obtained score of the first audio signal is less than or equal to the second threshold, the electronic device 100 does not perform automatic unlocking and the operation corresponding to “WeChat Pay”. In some embodiments, when the obtained score of the first audio signal is less than or equal to the second threshold, the electronic device 100 may further prompt the user that recognition of the audio signal fails. For example, the electronic device 100 may prompt, in a voice manner, the user that the recognition of the audio signal fails. For example, the electronic device 100 may play first voice prompt information by using the microphone 140C or a microphone of a headset connected to the electronic device 100. The first voice prompt information is used to prompt the user that the recognition of the audio signal fails. For example, the first voice prompt information may be “try again”. For another example, when the score of the first audio signal determined by the electronic device 100 is relatively low because of ambient noise, the first voice prompt information may further be “please move to a quiet place and try again”. In addition, in this embodiment of this application, when the obtained score of the first audio signal is less than or equal to the second threshold, the electronic device 100 may further display prompt information in the lock screen interface to prompt the user that the recognition of the audio signal fails.
In addition, it should be noted that in this embodiment of this application, the first threshold and the second threshold may be preset on the electronic device 100, and values of the first threshold and the second threshold may be correspondingly set based on an actual requirement. For example, when the score of the audio signal indicates a possibility of a voice made by a preset user, the first threshold may be preset to 0.95, and the second threshold may be preset to 0.85.
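For illustration only, the two-threshold decision described above can be sketched as follows. The threshold values (0.95 and 0.85) follow the example in this paragraph, and the function and label names are hypothetical, not part of the application.

```python
# Illustrative sketch of the two-threshold decision; threshold values follow
# the example above, and all names here are hypothetical.
FIRST_THRESHOLD = 0.95   # at or above this: unlock and execute directly
SECOND_THRESHOLD = 0.85  # between the thresholds: ask for security authentication

def decide(score):
    """Map a voiceprint score onto one of the three behaviors described above."""
    if score >= FIRST_THRESHOLD:
        return "unlock_and_execute"
    if score > SECOND_THRESHOLD:
        return "prompt_security_authentication"
    return "reject"
```

A score exactly equal to the second threshold is rejected, matching the “less than or equal to the second threshold” wording used throughout this description.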
The electronic device 100 obtains the first audio signal when a screen is off, and executes the operation corresponding to the keyword of the first audio signal. In addition, the electronic device 100 may alternatively obtain the first audio signal when the display screen 151 displays the lock screen interface. In that case, for the process in which the electronic device 100 subsequently performs the operation corresponding to the keyword of the first audio signal, refer to the foregoing process performed after the electronic device 100 obtains the first audio signal when the screen is off.
It should be noted that the first audio signal may be a user's voice, where the user's voice may be collected by the electronic device 100 by using the microphone 140C of the electronic device 100, or may be a user's voice that is collected by a headset connected to the electronic device 100 and that is sent to the electronic device 100. In some embodiments, the first audio signal includes the user's voice and another signal. For example, when the electronic device 100 is connected to a bone conduction headset, the first audio signal includes the user's voice and a bone conduction signal, where the bone conduction signal is a voice generated by vibration of an ear bone when the user makes a voice. Because the bone conduction signal is the voice generated by vibration of the ear bone when the user makes the voice, the electronic device 100 can identify that the voice is made by a person. In addition, when different people make voices with a same keyword, the vibration of the ear bone differs in position, distance, or the like, so that the bone conduction signal is unique. The electronic device 100 performs voiceprint recognition by combining the bone conduction signal with the user's voice, and can further strengthen identification of the user identity, to help improve security. In addition, when the electronic device 100 is a wearable watch, the another signal may be a voice generated by a pulse beat. By combining the another signal and the user's voice, the electronic device 100 can identify an instruction given by the user, and identify an identity of the user who makes the voice, to help improve security of controlling the electronic device 100 through the audio. It should be further noted that the another signal in this embodiment of this application may alternatively be an optical signal, a temperature, or the like. An expression form of the another signal is not limited in this embodiment of this application.
In this embodiment of this application, because the electronic device 100 can execute the user instruction by obtaining the audio signal, the user can automatically unlock and perform the corresponding operation based on the obtained audio signal when the screen of the electronic device is off or the screen of the electronic device is locked, to help simplify operation steps of the user and improve user experience. In addition, in this embodiment of this application, the first threshold and the second threshold are set, to help reduce a quantity of times that the electronic device 100 rejects a user request sent through the audio, and to improve user experience. For example, when the obtained audio signal is affected by ambient noise or a change of the user's own voice, voiceprint recognition of the audio signal may be affected, and a determined score of the audio signal is slightly lower than the first threshold. If the electronic device 100 directly rejects the user request, a misjudgment may be caused. Therefore, in this embodiment of this application, the second threshold is further set. It should be noted that, in this embodiment of this application, the first threshold and the second threshold may be the same. When the first threshold and the second threshold are the same, and when the obtained score of the audio signal is greater than the first threshold, the electronic device 100 may perform the operation corresponding to the keyword of the audio signal. When the score of the obtained audio signal is less than or equal to the first threshold, the electronic device 100 displays the lock screen interface on the display screen 151. Specifically, for a specific implementation of the lock screen interface, refer to the foregoing related descriptions.
It should be understood that in this embodiment of this application, different thresholds may be set for audio signals of different keywords, or a same threshold may be set for audio signals of different keywords. In this embodiment of this application, when different thresholds are preset for audio signals of different keywords, the electronic device 100 may first identify a keyword in an audio signal, and then search, based on the keyword, for a threshold set for the keyword.
For example, as shown in
In addition, it should be noted that, for a manner of obtaining the second audio signal by the electronic device 100, refer to the manner of obtaining the first audio signal by the electronic device 100.
The first audio signal is used as an example. The electronic device 100 may perform voiceprint recognition on the first audio signal in the following manner, to obtain the score of the first audio signal:
It should be noted that in this embodiment of this application, the electronic device 100 may alternatively perform voiceprint recognition on the first audio signal in another manner, to obtain the score of the first audio signal. This is not limited.
Step 801: The electronic device 100 obtains a first audio signal when the electronic device 100 is not unlocked. When the electronic device 100 is not unlocked, the electronic device 100 may be screen-off, or may display a lock screen interface on the display screen 151. Specifically, the first audio signal may be collected by the electronic device 100 by using the microphone 140C of the electronic device 100, or may be collected and reported to the electronic device 100 by a headset or another device connected to the electronic device 100.
Step 802: The electronic device 100 performs voice recognition on the first audio signal, to obtain a keyword of the first audio signal. It should be noted that, when the first audio signal is collected and reported to the electronic device 100 by the headset or the another device connected to the electronic device 100, the headset or the another device connected to the electronic device 100 may perform voice recognition on the first audio signal, and report the keyword of the recognized first audio signal to the electronic device 100. Alternatively, the electronic device 100 may perform voice recognition on the first audio signal to obtain the keyword of the first audio signal. The following uses an example in which the electronic device 100 performs voice recognition for description. For related descriptions of performing voice recognition by the headset or the another device connected to the electronic device 100, refer to related descriptions of performing voice recognition by the electronic device 100.
In some embodiments, the electronic device 100 performs voice recognition on the first audio signal, and when the recognition of the keyword of the first audio signal fails, may play second voice prompt information by using the microphone 140C, a microphone connected to the electronic device 100, or the like. The second voice prompt information may be used to prompt the user that the recognition of the keyword of the first audio signal fails. For example, the second voice prompt information may be “It is not clear. Please try it again”. Alternatively, the electronic device 100 may display prompt information in the lock screen interface, to prompt the user that the recognition of the keyword of the first audio signal fails. For example, when the screen is off, the electronic device 100 performs voice recognition on the first audio signal. When the recognition of the keyword of the first audio signal fails, the electronic device 100 lights up the screen, and displays the lock screen interface on the display screen 151. The lock screen interface includes prompt information, and the prompt information is used to prompt the user that the electronic device 100 fails to recognize the keyword.
For example, when recognizing that the keyword of the first audio signal is “WeChat Pay”, the electronic device 100 may execute, based on the keyword “WeChat Pay”, a service process of invoking a WeChat application installed on the electronic device 100. In some embodiments, when recognizing that the keyword of the first audio signal is “WeChat Pay”, if detecting that no WeChat application is installed on the electronic device 100, the electronic device 100 may automatically invoke an application store to download the WeChat application from the application store. In some other embodiments, when the electronic device 100 recognizes that the keyword of the first audio signal is “WeChat Pay”, if detecting that no WeChat application is installed on the electronic device 100, the electronic device 100 prompts the user that no WeChat application is installed on the electronic device 100. Specifically, the electronic device 100 may prompt, through a voice, the user that the WeChat application is not installed on the electronic device 100, or may prompt, by displaying prompt information, the user that the WeChat application is not installed.
Step 803: The electronic device 100 determines, based on the keyword of the first audio signal, a user keyword voiceprint model corresponding to the keyword of the first audio signal. In some embodiments, the keyword voiceprint model corresponding to the keyword of the first audio signal may be found based on the keyword of the first audio signal by the electronic device 100 from at least one user keyword voiceprint model pre-configured on the electronic device 100, or may be a general background model or the like.
Step 804: The electronic device 100 performs feature extraction on the first audio signal to obtain a voiceprint feature of the first audio signal.
For example, the voiceprint feature of the first audio signal may include a filter bank feature (filter bank feature), a mel-frequency cepstral coefficient (mel-frequency cepstral coefficient, MFCC), perceptual linear prediction (perceptual linear prediction, PLP), linear predictive coding (linear predictive coding, LPC), or the like, or may include an audio signal bottleneck feature extracted according to a voiceprint deep learning algorithm, or the like.
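As a rough illustration of the first listed feature, the following is a minimal, self-contained log mel filter bank extractor in NumPy. The frame sizes, filter count, sample rate, and all function names are assumptions made for this sketch, not values specified by the application.

```python
import numpy as np

def log_mel_filter_bank(signal, sample_rate=16000, n_fft=512,
                        frame_len=400, hop=160, n_filters=26):
    """Toy log mel filter bank extractor (an illustrative sketch only)."""
    # Pre-emphasis to boost high frequencies.
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # Split into overlapping, windowed frames.
    n_frames = 1 + max(0, (len(emphasized) - frame_len) // hop)
    frames = np.stack([emphasized[i * hop : i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular filters spaced evenly on the mel scale.
    def hz_to_mel(hz): return 2595 * np.log10(1 + hz / 700)
    def mel_to_hz(mel): return 700 * (10 ** (mel / 2595) - 1)
    mel_points = np.linspace(0, hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for m in range(1, n_filters + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log compression; result shape: (n_frames, n_filters).
    return np.log(power @ fbank.T + 1e-10)
```

An MFCC would apply a discrete cosine transform to these log filter bank energies; the bottleneck features mentioned above would instead come from an intermediate layer of a trained neural network.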
It should be noted that there is no necessary sequence between the step 804, the step 802, and the step 803, but the step 802, the step 803, and the step 804 are performed before the step 805.
Step 805: The electronic device 100 matches the voiceprint feature of the first audio signal with the user keyword voiceprint model corresponding to the keyword of the first audio signal, to obtain the score of the first audio signal.
For example, when the first audio signal is the user's voice collected by the microphone 140C of the electronic device 100 or the microphone of the headset connected to the electronic device 100, the user keyword voiceprint model corresponding to the keyword of the first audio signal is a voiceprint model in which the user makes a voice of the keyword. For another example, when the first audio signal includes the user's voice and another signal, for example, the another signal is a bone conduction signal collected by the headset connected to the electronic device 100, the user keyword voiceprint model corresponding to the keyword of the first audio signal includes the voiceprint model in which the user makes the voice of the keyword and a voiceprint model of the bone conduction signal obtained when the user makes the voice of the keyword.
That the first audio signal includes the user's voice and the bone conduction signal is used as an example to describe in detail the score of the first audio signal that is obtained by the electronic device 100.
For example, as shown in
It should be noted that, in this embodiment of this application, when the first audio signal includes the user's voice and another signal, for a manner of calculating the score of the first audio signal, refer to the manner of calculating the score of the first audio signal when the first audio signal includes the user's voice and the bone conduction signal.
Certainly, in this embodiment of this application, when the first audio signal includes the user's voice and the another signal, the another signal may be one type of signal, or may be a plurality of types of signals. This is not limited.
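One simple way to combine the per-signal scores into a single score of the first audio signal is a weighted sum. The application does not specify the fusion rule or any weights, so the following is purely an assumed sketch.

```python
# Hypothetical score fusion: the 0.7/0.3 weighting is an assumption for
# illustration, not a value from the application.
def fuse_scores(voice_score, bone_score, voice_weight=0.7):
    """Combine the microphone-voice score and the bone conduction score."""
    return voice_weight * voice_score + (1 - voice_weight) * bone_score
```

With more than one additional signal, the same idea extends to a weighted sum over all per-signal scores, with the weights summing to one.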
Step 806: When the score of the first audio signal is greater than or equal to the first threshold, the electronic device 100 automatically unlocks the electronic device 100 and performs an operation corresponding to the keyword of the first audio signal.
Step 807: When the score of the first audio signal is less than the first threshold and greater than the second threshold, the electronic device 100 displays the lock screen interface on the display screen 151, to prompt the user to perform security authentication. After the user security authentication succeeds, the electronic device 100 unlocks the electronic device 100 and performs the operation corresponding to the keyword of the first audio signal.
Step 808: When the score of the first audio signal is less than or equal to the second threshold, the electronic device 100 does not unlock the electronic device 100 and does not perform the operation corresponding to the keyword of the first audio signal. In some embodiments, when the score of the first audio signal is less than or equal to the second threshold, the electronic device 100 further prompts the user that the recognition of the first audio signal fails.
It should be understood that, for the step 806 to the step 808, refer to the foregoing related descriptions of obtaining the first audio signal by the electronic device 100 when the screen is off. Details are not described herein again.
In this embodiment of this application, the electronic device 100 may pre-configure the user keyword voiceprint model. The keyword “WeChat Pay” is used as an example. The electronic device 100 may pre-configure, based on a pre-recorded audio signal whose keyword is “WeChat Pay”, a user keyword voiceprint model corresponding to “WeChat Pay”.
For example, the electronic device 100 may record the audio signal of “WeChat Pay” in the following manner:
As shown in
For example, the user may say “WeChat Pay” to the electronic device 100 based on guidance in the user interface displayed on the electronic device 100, so that the electronic device 100 records the audio signal of “WeChat Pay”. For another example, the user may also say “WeChat Pay” to a headset connected to the electronic device 100, so that the electronic device 100 records the audio signal of “WeChat Pay”. Generally, the user may say “WeChat Pay” based on a prompt displayed by the electronic device 100 on the display screen 151. For example, when the electronic device 100 needs to continuously record the audio signal of “WeChat Pay” for three times, the electronic device 100 may display, on the display screen 151 after the audio signal of “WeChat Pay” spoken by the user is successfully recorded for the first time, prompt information that requires the user to speak “WeChat Pay” again. In some embodiments, after obtaining the audio signal, the electronic device 100 performs voice recognition to determine whether a keyword of the obtained audio signal is consistent with the keyword “WeChat Pay” that the electronic device 100 requires the user to speak. If the keywords are consistent, subsequent steps may be performed. If the keywords are inconsistent, the audio signal obtained this time is discarded. In some embodiments, when the electronic device 100 determines that the keyword of the obtained audio signal is inconsistent with the keyword “WeChat Pay” that the electronic device 100 requires the user to say, the electronic device 100 may further prompt the user that the keyword spoken is incorrect. For example, the electronic device 100 may prompt, through a voice, the user that the keyword spoken is incorrect, or may prompt, by displaying prompt information on the display screen 151, the user that the keyword spoken is incorrect.
In addition, in this embodiment of this application, the electronic device 100 may further perform signal quality detection after obtaining the audio signal, and perform keyword matching when signal quality of the obtained audio signal is greater than a preset threshold. Certainly, in this embodiment of this application, the electronic device 100 may alternatively perform signal quality detection after keyword matching succeeds. Signal quality detection helps improve reliability of determining the user keyword voiceprint model.
In this embodiment of this application, when the signal quality of the obtained audio signal is less than or equal to the preset threshold, the electronic device 100 gives up the currently obtained audio signal. In some embodiments, when the signal quality of the obtained audio signal is less than or equal to the preset threshold, the electronic device 100 may further prompt the user to move to a quiet place to record the audio signal. Specifically, the electronic device 100 may prompt, by playing a voice by using a microphone or displaying prompt information on the display screen 151, the user to move to the quiet place to record the audio signal. In this embodiment of this application, the user may be further prompted in another manner. This is not limited.
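The recording flow above (discard a recording on keyword mismatch or poor signal quality, and continue until three good recordings are collected) can be sketched as follows. `record`, `recognize_keyword`, and `signal_quality` are hypothetical placeholders for the device's own routines, and the quality threshold is an assumption.

```python
# Sketch of the enrollment loop; the helper callables and the quality
# threshold are hypothetical stand-ins, not the device's real routines.
def enroll(prompted_keyword, record, recognize_keyword, signal_quality,
           needed=3, quality_threshold=10.0):
    accepted = []
    while len(accepted) < needed:
        audio = record()
        if recognize_keyword(audio) != prompted_keyword:
            continue  # keyword mismatch: discard and re-prompt the user
        if signal_quality(audio) <= quality_threshold:
            continue  # poor signal quality: discard this recording
        accepted.append(audio)
    return accepted
```

In a real device each `continue` branch would also trigger the voice or on-screen prompt described above before re-recording.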
After the electronic device 100 successfully records audio signals of “WeChat Pay” for three consecutive times, the user interface 920 may be displayed. At this point, the virtual button 921 on the user interface 920 is turned on. In some other embodiments, when the electronic device 100 successfully collects the audio signals of “WeChat Pay” for three consecutive times, if audio signals of unlocking and Alipay payment have not been recorded, a user interface 1000 shown in
When the user does not turn on the virtual button 921 for the first time, the electronic device 100 may directly turn on a “WeChat Pay” function in response to an operation of turning on the virtual button 921 by the user. In some embodiments, when the user does not turn on the virtual button 921 for the first time, in response to the operation of turning on the virtual button 921 by the user, the electronic device 100 may further pop up a prompt to ask the user whether to record “WeChat Pay” again. For example, as shown in
In some embodiments, if detecting that a login account changes, the electronic device 100 may reset the virtual button on the user interface 920. For example, the user may switch an account by using a login account on a user interface 910 shown in
In some other embodiments, the user interface 920 further includes a virtual button 922 for adding a new operation instruction. This helps the user add the new operation instruction, and further improves user experience. The electronic device 100 may add or delete a new operation instruction in response to an operation on the virtual button 922, for example, bus card payment. When the virtual button used to turn on or turn off the bus card payment is turned on, the user may say “bus card payment”, so that the electronic device 100 displays a bus card payment interface.
For example,
Step 1201: An electronic device 100 obtains an audio signal that is recorded based on a keyword prompted by the electronic device 100.
For example, the electronic device 100 may prompt the user with a keyword in a user interface 930 shown in
Step 1202: The electronic device 100 performs signal quality detection on the recorded audio signal.
During specific implementation, in this embodiment of this application, the electronic device 100 may perform signal quality detection on the recorded audio signal in the following manner:
The electronic device 100 may compare a signal-to-noise ratio of the recorded audio signal with a preset threshold. For example, when the signal-to-noise ratio of the recorded audio signal is greater than the preset threshold, the signal quality detection succeeds. When the signal-to-noise ratio of the recorded audio signal is less than or equal to the preset threshold, the signal quality detection fails. In some embodiments, when the signal quality detection fails, the audio data recorded this time is discarded, and the user is prompted that the recorded audio signal is invalid and needs to be recorded again. For example, the electronic device 100 may prompt, through a voice, the user that the recorded audio signal is invalid and needs to be recorded again. For another example, the electronic device 100 may further prompt, by displaying prompt information on the display screen 151, the user that the recorded audio signal is invalid and needs to be recorded again.
It should be noted that the preset threshold may be set before the electronic device 100 is delivered from a factory, or may be obtained after a large quantity of audio signals recorded by the user are analyzed according to a preset algorithm.
In addition, it should be further noted that the signal quality detection may alternatively be detection based on low noise energy, speech energy, or the like. A parameter used for the signal quality detection is not limited in this embodiment of this application.
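A minimal SNR-based check of the kind described above might look like this. The decibel threshold and the assumption that a separate noise estimate is available are both illustrative, since the application leaves these parameters open.

```python
import numpy as np

# Illustrative SNR-based quality check; the 15 dB threshold is an assumption.
def snr_db(signal, noise):
    """Signal-to-noise ratio in decibels from mean power estimates."""
    p_signal = np.mean(np.square(signal))
    p_noise = np.mean(np.square(noise))
    return 10 * np.log10(p_signal / p_noise)

def quality_check(signal, noise, threshold_db=15.0):
    """Detection succeeds only when the SNR exceeds the preset threshold."""
    return snr_db(signal, noise) > threshold_db
```

In practice the noise estimate could come from the non-speech frames preceding the keyword, and the same structure would apply to the other detection parameters mentioned above, such as speech energy.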
Step 1203: After the signal quality detection succeeds, the electronic device 100 determines whether the keyword of the recorded audio signal is consistent with the keyword prompted by the electronic device 100. If the keywords are consistent, a step 1204 is performed. If the keywords are inconsistent, the recorded audio signal is abandoned. In some embodiments, when the keywords are inconsistent, the electronic device 100 further prompts the user that the keyword spoken by the user is invalid. A manner of prompting the user that the keyword is invalid is not limited in this embodiment of this application. The electronic device 100 may identify the keyword from the recorded audio signal through voice recognition.
It should be noted that a sequence of performing signal quality detection and keyword determining by the electronic device 100 is not limited in this embodiment of this application.
Step 1204: The electronic device 100 performs feature extraction on the recorded audio signal, to obtain a voiceprint feature of the recorded audio signal.
Step 1205: The electronic device 100 determines a user keyword voiceprint model based on the voiceprint feature of the recorded audio signal and a background model pre-stored on the electronic device 100.
In this embodiment of this application, the background model is a model and a related parameter that are trained offline by using a large amount of collected keyword data of speakers, and may be a Gaussian mixture model (Gaussian mixture model, GMM) and a first related parameter. For example, the first related parameter may include a Gaussian mixture distribution parameter, an adaptive adjustment factor, and the like. For another example, the background model may alternatively be a UBM-ivector (universal background model-ivector) model and a second related parameter, and the second related parameter may include a Gaussian mixture distribution parameter, a total variability space matrix (total variability space matrix), and the like. For another example, the background model may alternatively be a DNN-ivector (deep neural networks-ivector) model and a third related parameter, and the third related parameter may include a DNN-based network structure, a weight, a total variability space matrix, and the like. For another example, the background model may further be an end-to-end model and a parameter based on deep learning, for example, an x-vectors algorithm, or even a combination of the foregoing plurality of models and parameters, for example, a combination of the GMM and the DNN-ivector. The user keyword voiceprint model is obtained by adaptively adapting the general background model and the parameter by using the audio signal pre-recorded by the user on the electronic device 100, to reflect a feature of the user. The user keyword voiceprint model is subsequently used for comparison after a user enters an audio signal, to determine whether the user is the same as the user who pre-records the audio signal on the electronic device 100.
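The background-model idea above (score an utterance against a user-adapted model versus a general background model) can be illustrated with a deliberately simplified, NumPy-only sketch that replaces the GMM/ivector machinery with a single diagonal-covariance Gaussian per model. Everything here, including the synthetic features, is an assumption made for illustration, not the claimed implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class DiagGaussianModel:
    """Single diagonal-covariance Gaussian; a deliberate simplification of the
    GMM / UBM-ivector background models named in the text."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.var = X.var(axis=0) + 1e-6  # variance floor avoids division by zero
        return self
    def avg_log_likelihood(self, X):
        d = X - self.mean
        per_frame = -0.5 * (np.log(2 * np.pi * self.var) + d ** 2 / self.var).sum(axis=1)
        return per_frame.mean()

# Synthetic stand-ins for the offline background corpus and the user's
# enrollment features (e.g. the filter bank vectors extracted in step 1204).
background = rng.normal(0.0, 1.0, size=(2000, 13))
enrollment = rng.normal(0.5, 1.0, size=(300, 13))

ubm = DiagGaussianModel().fit(background)
user_model = DiagGaussianModel().fit(enrollment)

def score_utterance(features):
    """Log-likelihood ratio: higher values favor the enrolled user."""
    return user_model.avg_log_likelihood(features) - ubm.avg_log_likelihood(features)
```

A real system would adapt the background model toward the enrollment data (for example, MAP adaptation of GMM means) rather than fit an independent model, but the likelihood-ratio comparison against the background model is the same.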
Step 1206: The electronic device 100 stores the user keyword voiceprint model. In some embodiments, the electronic device 100 stores the user keyword voiceprint model in a secure zone. For example, the secure zone may be a trusted execution environment (trusted execution environment, TEE) of Android.
It may be understood that in this embodiment of this application, different keywords correspond to different user keyword voiceprint models.
When the recorded audio signal includes the user's voice and another signal that are collected by the microphone, the user keyword voiceprint model pre-configured on the electronic device 100 includes a voiceprint model of the user's keyword voice and a voiceprint model of another signal of the user's keyword.
The following provides detailed description by using an example in which the recorded audio signal includes the user's voice and the bone conduction signal that are collected by the microphone.
For example,
Step 1301: An electronic device 100 obtains an audio signal that is recorded based on a keyword prompted by the electronic device 100. The recorded audio signal includes a user's voice and a bone conduction signal that are collected by a microphone.
Step 1302: The electronic device 100 determines whether a keyword of the recorded audio signal is consistent with the keyword prompted by the electronic device 100, and if the keyword of the recorded audio signal is consistent with the keyword prompted by the electronic device 100, performs step 1303. When the keyword of the recorded audio signal is inconsistent with the keyword prompted by the electronic device 100, the audio signal recorded this time is abandoned. In some embodiments, when the keyword of the recorded audio signal is inconsistent with the keyword prompted by the electronic device 100, the electronic device 100 further prompts the user that the spoken keyword is incorrect. For a specific prompt manner, refer to the foregoing related descriptions.
Step 1303: The electronic device 100 performs signal quality detection. Specifically, the electronic device 100 separately performs signal quality detection on the bone conduction signal and the user's voice. If either the bone conduction signal or the user's voice fails the signal quality detection, the electronic device 100 abandons the currently recorded audio signal; after both the bone conduction signal and the user's voice succeed in the signal quality detection, step 1304 is performed.
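The application does not specify the signal quality criteria, but a toy quality gate can illustrate the kind of check step 1303 performs: reject a recording that is too quiet or heavily clipped. The thresholds and the normalized-float input format are assumptions for illustration only.

```python
def check_signal_quality(samples, min_rms=0.01, max_clip_ratio=0.01):
    """Illustrative signal-quality gate: reject a recording that is too
    quiet (low RMS energy) or heavily clipped. `samples` are floats
    normalized to [-1.0, 1.0]; thresholds are hypothetical."""
    if not samples:
        return False
    rms = (sum(s * s for s in samples) / len(samples)) ** 0.5
    clipped = sum(1 for s in samples if abs(s) >= 0.999)
    return rms >= min_rms and clipped / len(samples) <= max_clip_ratio
```

In the flow of step 1303, a gate like this would run once on the microphone voice and once on the bone conduction signal, and the recording would be abandoned unless both pass.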
Step 1304: The electronic device 100 separately performs feature extraction on the bone conduction signal and the user's voice, to obtain a voiceprint feature of the bone conduction signal and a voiceprint feature of the user's voice.
Step 1305: The electronic device 100 determines a voiceprint model of a user keyword bone conduction signal based on the voiceprint feature of the bone conduction signal and a general background model of a bone conduction signal pre-stored on the electronic device 100, and determines a voiceprint model of a user keyword voice based on the voiceprint feature of the user's voice and a general background model of a voice that is collected by the microphone and that is pre-stored on the electronic device 100.
Step 1306: The electronic device 100 stores the voiceprint model of the user keyword voice and the voiceprint model of the user keyword bone conduction signal.
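Steps 1301 to 1306 can be sketched as one enrollment flow with the concrete signal-processing operations injected as callables. All of the interfaces here (`recognize_keyword`, `check_quality`, `extract_features`, `adapt_model`, `store`) are hypothetical placeholders, not APIs defined by this application.

```python
def enroll_keyword(mic_audio, bone_audio, prompted_keyword,
                   recognize_keyword, check_quality, extract_features,
                   adapt_model, store):
    """Sketch of the enrollment flow in steps 1301-1306 for an audio
    signal containing a microphone voice and a bone conduction signal.
    Returns True on success, False if the recording is abandoned."""
    # Step 1302: discard the recording if the spoken keyword differs
    # from the keyword prompted by the device.
    if recognize_keyword(mic_audio) != prompted_keyword:
        return False
    # Step 1303: both channels must pass signal quality detection.
    if not (check_quality(mic_audio) and check_quality(bone_audio)):
        return False
    # Steps 1304-1305: extract features and adapt a model per channel.
    voice_model = adapt_model(extract_features(mic_audio))
    bone_model = adapt_model(extract_features(bone_audio))
    # Step 1306: persist both user keyword voiceprint models.
    store(prompted_keyword, voice_model, bone_model)
    return True
```

Injecting the operations keeps the control flow of the figure visible without committing to any particular recognizer or model type.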
It may be understood that the method for pre-configuring the user keyword voiceprint model shown in
The embodiments of this application may be used separately, or may be used in combination, to achieve different technical effects.
In the embodiments provided in this application, the method provided in the embodiments of this application is described from a perspective in which the electronic device 100 is used as an execution body. To implement the functions in the method provided in the embodiments of this application, the terminal may include a hardware structure and/or a software module, and implement the foregoing functions in a form of the hardware structure, the software module, or a combination of the hardware structure and the software module. Whether a specific function in the foregoing functions is performed by the hardware structure, the software module, or the combination of the hardware structure and the software module depends on a specific application and a design constraint of the technical solutions.
Based on a same concept,
when the electronic device 1400 is not unlocked, receiving a first audio signal, where the first audio signal includes a first voice signal of a user, and the first voice signal includes a keyword for requesting the electronic device to perform a first operation; performing voiceprint recognition on the first audio signal to determine a score of the first audio signal; when the score of the first audio signal is greater than or equal to a first threshold, unlocking the electronic device and performing the first operation; when the score of the first audio signal is less than the first threshold and greater than a second threshold, prompting the user to perform security authentication in a manner other than a voice manner; and after the security authentication performed by the user succeeds, unlocking the electronic device and performing the first operation.
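The two-threshold decision logic above can be sketched as a small function. The threshold values and the returned action names are hypothetical; the application only requires that the first threshold exceed the second.

```python
def handle_audio_request(score, first_threshold=0.8, second_threshold=0.5):
    """Map a voiceprint recognition score to one of the three outcomes
    described for the locked device (thresholds are illustrative)."""
    if score >= first_threshold:
        # High confidence: unlock and perform the requested operation.
        return "unlock_and_perform"
    if second_threshold < score < first_threshold:
        # Slightly low score: require non-voice security authentication
        # before unlocking and performing the operation.
        return "prompt_security_authentication"
    # Score at or below the second threshold: reject the request.
    return "reject"
```

Note that a score exactly at the second threshold falls into the rejection branch, matching the "less than or equal to the second threshold" condition described later for skipping the unlock.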
In some embodiments, the user may be prompted, in the following manner, to perform security authentication in the manner other than the voice manner:
displaying a lock screen interface of the electronic device 1400, where the lock screen interface is used to prompt the user to perform security authentication in the manner other than the voice manner. For example, the electronic device 1400 may include a display screen. The lock screen interface of the electronic device 1400 is displayed by using the display screen included in the electronic device 1400, or the lock screen interface of the electronic device 100 may be displayed by using a display device that has a display function and that is connected to the electronic device 100 in a wired or wireless manner.
In some embodiments, the method further includes an instruction used to: when the score of the first audio signal is less than or equal to the second threshold, skip unlocking the electronic device and skip performing the first operation.
In some embodiments, the method includes an instruction used to send first voice prompt information when the score of the first audio signal is less than or equal to the second threshold, where the first voice prompt information is used to prompt the user that recognition of the first audio signal fails, and/or an instruction used to display first prompt information in the lock screen interface when the score of the first audio signal is less than or equal to the second threshold, where the first prompt information is used to prompt the user that recognition of the first audio signal fails.
In some embodiments, voiceprint recognition may be performed on the first audio signal in the following manner, to determine the score of the first audio signal: determining, from at least one pre-configured user keyword voiceprint model, a user keyword voiceprint model corresponding to a keyword included in the first audio signal; and extracting a voiceprint feature of the first audio signal, and matching the extracted voiceprint feature with the determined user keyword voiceprint model corresponding to the keyword of the first audio signal, to determine the score of the first audio signal.
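As a rough illustration of this select-then-match step, the sketch below looks up the model for the spoken keyword and scores the extracted feature against it with cosine similarity. The real system may instead use GMM or i-vector scoring as described earlier; the vector-model representation here is an assumption made purely for illustration.

```python
import math

def score_audio(feature, keyword, keyword_models):
    """Select the pre-configured user keyword voiceprint model for the
    spoken keyword and score the extracted voiceprint feature against it
    (cosine similarity stands in for the actual matching method)."""
    model = keyword_models.get(keyword)
    if model is None:
        # No model configured for this keyword: treat as a zero score.
        return 0.0
    dot = sum(a * b for a, b in zip(feature, model))
    norm = (math.sqrt(sum(a * a for a in feature))
            * math.sqrt(sum(b * b for b in model)))
    return dot / norm if norm else 0.0
```

The returned score would then feed the two-threshold decision described for the first aspect.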
In some embodiments, the method further includes: an instruction used to receive a second audio signal of the user, where the second audio signal includes a second voice signal, and the second voice signal is spoken by the user based on a keyword prompted by the electronic device; an instruction used to: when identifying that a keyword included in the second audio signal is consistent with a keyword prompted by the electronic device, extract a voiceprint feature of the second audio signal; and an instruction used to configure, based on the voiceprint feature of the second audio signal and a pre-stored background model, a user keyword voiceprint model corresponding to the keyword prompted by the electronic device.
In some embodiments, the method further includes an instruction used to: when identifying that the keyword included in the second audio signal is inconsistent with the keyword prompted by the electronic device, prompt the user that the keyword is incorrect.
In some embodiments, the voiceprint feature of the first audio signal includes at least one of MFCC, PLP, or LPC.
In some embodiments, the first audio signal may be collected and reported to the electronic device 1400 by a headset connected to the electronic device. Alternatively, the first audio signal may be collected by the electronic device 1400 by using a microphone of the electronic device 1400.
In some other embodiments, when the headset is a bone conduction headset, the first audio signal further includes a bone conduction signal, and the bone conduction signal is a signal generated by vibration of an ear bone when the user makes a voice.
It should be understood that the electronic device 1400 may be configured to implement the audio control method in the embodiments of this application. For related features, refer to the foregoing descriptions. Details are not described herein again.
A person of ordinary skill in the art may clearly know that the embodiments of this application may be implemented through hardware, firmware, or a combination thereof. When the embodiments of this application are implemented by software, the foregoing functions may be stored in a computer-readable medium or transmitted as one or more instructions or code in a computer-readable medium. The computer-readable medium includes a computer storage medium and a communications medium. The communications medium includes any medium that enables a computer program to be transmitted from one place to another. The storage medium may be any available medium accessible by a computer. The computer-readable medium may include, by way of example but not limitation, a RAM, a ROM, an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), a compact disc read-only memory (compact disc read-only memory, CD-ROM) or another compact disc storage, a magnetic disk storage medium or another magnetic storage device, or any other medium that can be configured to carry or store desired program code in a form of an instruction or a data structure and that can be accessed by a computer. In addition, any connection may be appropriately defined as a computer-readable medium. For example, if software is transmitted from a website, a server, or another remote source by using a coaxial cable, an optical fiber/cable, a twisted pair, a digital subscriber line (digital subscriber line, DSL), or wireless technologies such as infrared, radio, and microwave, the coaxial cable, the optical fiber/cable, the twisted pair, the DSL, or the wireless technologies such as infrared, radio, and microwave are included in the definition of the medium to which they belong.
A disk (disk) and a disc (disc) that are used in the embodiments of this application include a compact disc (compact disc, CD), a laser disc, an optical disc, a digital video disc (digital video disc, DVD), a floppy disk, and a Blu-ray disc. The disk usually magnetically copies data, and the disc optically copies data in a laser manner. The foregoing combination shall also be included in the protection scope of the computer-readable medium.
In summary, what are described above are merely embodiments of this application, but are not intended to limit the protection scope of this application. Any modification, equivalent replacement, or improvement made according to the disclosure of this application shall fall within the protection scope of this application.
Number | Date | Country | Kind |
---|---|---|---|
201811291610.6 | Oct 2018 | CN | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/CN2019/114175 | 10/30/2019 | WO | 00 |