This application relates to the field of short-range communication technologies, and more specifically, to a wireless headset and an audio device.
With the development of wireless communication technologies and audio processing technologies, more users prefer wireless headsets for audio playing because of their portability and convenience. To improve the audio playing effect and provide audiences with an immersive feeling, for example, to truly restore the recorded sound source and create stereo sound so that audiences feel as if they are in a real sound scene, binaural audio, referred to as the “binaural recording technology” for short, was proposed. The binaural recording technology aims to separately capture sounds near the left ear and the right ear of a person, and process the captured sounds near the left ear and the right ear, to simulate the acoustic effect of listening with human ears.
In the binaural recording technology currently applied to a wireless headset, the captured left-ear audio data and the captured right-ear audio data usually need to be packaged and then transmitted to a processing device over the air interface of one of the headsets. Because the signal transmission capability of the air interface of that headset is limited, the captured audio data needs to be compressed, which reduces the sound quality of the recorded audio data. In addition, because one of the wireless headsets carries the entire audio data transmission, the power consumption of that headset is excessively high. As a result, the power consumption of the left and right headsets is unbalanced, and the battery life of the headset device is reduced. Therefore, in a binaural recording scenario, how to improve the performance of a wireless headset becomes a problem that needs to be resolved.
A wireless headset and an audio device provided in this application can improve performance of the wireless headset.
According to a first aspect, this application provides a wireless headset. The wireless headset includes: a first headset, including: a first microphone, configured to obtain, through voice capturing, first audio data corresponding to a first channel; a first encoder, configured to encode the first audio data to obtain a first audio packet; and a first wireless transceiver, configured to transmit the first audio packet to an audio device over a first communication connection; and a second headset, including: a second microphone, configured to obtain, through voice capturing, second audio data corresponding to a second channel; a second encoder, configured to encode the second audio data to obtain a second audio packet; and a second wireless transceiver, configured to transmit the second audio packet to the audio device over a second communication connection. The first communication connection is different from the second communication connection. The first channel and the second channel are used to implement stereo sound effect.
It should be noted that the first communication connection being different from the second communication connection means that the data transmission paths are different. That is, the first headset and the second headset each perform data transmission with the audio device independently.
The wireless headset herein may be a true wireless stereo headset or another type of wireless headset.
The first headset and the second headset in the wireless headset described in this application may each establish a connection with the audio device, and may each perform data transmission with the audio device independently. Therefore, the first headset and the second headset in the wireless headset are decoupled: data captured by one of the headsets does not need to be transmitted to the audio device through the other headset. In this way, the power consumption of the two headsets in the wireless headset is balanced. In addition, both headsets in the wireless headset can encode audio data and transmit the encoded audio data to the audio device. Therefore, compared with using one headset to communicate with the audio device, the amount of audio data transmitted in a transmission cycle can be increased, thereby reducing the amount by which the audio data is compressed and improving audio quality. Further, neither headset needs to transmit its obtained audio data to the other headset. Therefore, in a real-time audio processing scenario, the delay of data transmission between the headset and the audio device can be reduced, which helps improve user experience.
Based on the first aspect, in a possible implementation, the first wireless transceiver is further configured to: receive first indication information from the audio device, and determine, based on the first indication information, a first communication period used by the first wireless transceiver to transmit the first audio packet in a communication cycle. The second wireless transceiver is further configured to: receive second indication information from the audio device, and determine, based on the second indication information, a second communication period used by the second wireless transceiver to transmit the second audio packet in the communication cycle.
In a first possible implementation, the first communication period and the second communication period are time-division periods in the communication cycle. In this case, the audio device configured to communicate with the first wireless transceiver and the second wireless transceiver may be provided with one antenna, or may be provided with a plurality of antennas. The first communication period and the second communication period are set as time-division periods, so that the first wireless transceiver and the second wireless transceiver may alternately transmit the first audio packet and the second audio packet to the audio device in the communication cycle. In a specific implementation, the first communication period and the second communication period may be set as closely adjacent communication periods on a time axis. For example, one communication cycle is 400 μs, that is, includes four 100 μs periods. The first 100 μs is the first communication period, the second 100 μs is the second communication period, the third 100 μs is again the first communication period, and the fourth 100 μs is again the second communication period. Setting the first communication period and the second communication period as closely adjacent communication periods on the time axis improves data transmission efficiency and reduces the communication delay.
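For illustration only, the following minimal Python sketch lays out the time-division periods described above. The cycle length, slot length, and alternating slot ownership are assumptions of this sketch rather than values defined by this application.

CYCLE_US = 400   # assumed length of one communication cycle, in microseconds
SLOT_US = 100    # assumed length of one communication period, in microseconds

def slot_owner(slot_index: int) -> str:
    """Closely adjacent slots alternate between the two wireless transceivers."""
    return "first_transceiver" if slot_index % 2 == 0 else "second_transceiver"

for slot in range(CYCLE_US // SLOT_US):
    start = slot * SLOT_US
    print(f"{start:3d}-{start + SLOT_US:3d} us -> {slot_owner(slot)}")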
In a second possible implementation, the first communication period and the second communication period are same time periods in the communication cycle. In this implementation, the audio device that communicates with the first wireless transceiver and the second wireless transceiver may be provided with a plurality of antennas. A first antenna of the plurality of antennas is configured to receive the first audio packet from the first wireless transceiver, and a second antenna of the plurality of antennas is configured to receive the second audio packet from the second wireless transceiver.
It should be noted that, after the first wireless transceiver transmits the first audio packet to the audio device in the first communication period, the first wireless transceiver may further receive, from the audio device, indication information indicating whether the first audio packet is successfully transmitted. After the second wireless transceiver transmits the second audio packet to the audio device in the second communication period, the second wireless transceiver may further receive, from the audio device, indication information indicating whether the second audio packet is successfully transmitted.
Based on the first aspect, in a possible implementation, the first communication period includes: at least one first communication sub-period and at least one second communication sub-period. The at least one first communication sub-period is used to transmit the first audio packet to the audio device, and the at least one second communication sub-period is used to retransmit the first audio packet to the audio device when the first audio packet is unsuccessfully transmitted. An example in which one communication cycle is 400 μs, that is, includes four 100 μs periods, is used for description. When the first communication period and the second communication period are time-division periods in the communication cycle, the first 100 μs is the first communication sub-period, and the third 100 μs is the second communication sub-period. In this case, the first wireless transceiver transmits the first audio packet in the first 100 μs. In addition, within the first 100 μs, the first wireless transceiver may further receive, from the audio device, indication information indicating whether the first audio packet is successfully transmitted. When determining that the first audio packet is unsuccessfully transmitted, the first wireless transceiver retransmits the first audio packet to the audio device in the third 100 μs. When the first communication period and the second communication period are the same periods in the communication cycle, the first 100 μs is the first communication sub-period, and the second 100 μs is the second communication sub-period. In this case, the first wireless transceiver transmits the first audio packet in the first 100 μs. In addition, within the first 100 μs, the first wireless transceiver may further receive, from the audio device, indication information indicating whether the first audio packet is successfully transmitted. When determining that the first audio packet is unsuccessfully transmitted, the first wireless transceiver retransmits the first audio packet to the audio device in the second 100 μs.
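As an aid to understanding only, the following Python sketch shows how a transceiver might use a first communication sub-period to transmit an audio packet and a later sub-period to retransmit it when no acknowledgement is received. The helper names and the randomized acknowledgement are assumptions of this sketch, not part of any protocol.

import random

def device_ack(packet: bytes) -> bool:
    # Stand-in for the indication information returned by the audio device;
    # the outcome is randomized here purely for demonstration.
    return random.random() > 0.3

def send_with_retransmission(packet: bytes, sub_period_starts_us) -> bool:
    """Try the packet once per allotted sub-period until it is acknowledged."""
    for start_us in sub_period_starts_us:
        print(f"transmitting in sub-period starting at {start_us} us")
        if device_ack(packet):
            print("acknowledged")
            return True
        print("not acknowledged, retransmitting in the next sub-period")
    return False

send_with_retransmission(b"\x01\x02", sub_period_starts_us=[0, 200])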
Based on the first aspect, in a possible implementation, the second communication period includes: at least one third communication sub-period and at least one fourth communication sub-period. The at least one third communication sub-period is used to transmit the second audio packet to the audio device, and the at least one fourth communication sub-period is used to retransmit the second audio packet to the audio device when the second audio packet is unsuccessfully transmitted. Specific implementation is the same as that of the first communication sub-period and the second communication sub-period. For detailed description, refer to related descriptions of the first communication sub-period and the second communication sub-period. Details are not described herein again.
Based on the first aspect, in a possible implementation, the first headset further includes: a first decoder and a first speaker, and the second headset further includes a second decoder and a second speaker; the first wireless transceiver is further configured to: receive the third audio packet from the audio device, where the third audio packet is generated after the audio device performs audio data processing on the first audio data in the first audio packet; the first decoder is configured to decode the third audio packet to obtain third audio data; the first speaker is configured to perform playing based on the third audio data; the second wireless transceiver is further configured to: receive a fourth audio packet from the audio device, where the fourth audio packet is generated after the audio device performs audio data processing on the second audio data in the second audio packet; the second decoder is configured to decode the fourth audio packet to obtain fourth audio data; and the second speaker is configured to perform playing based on the fourth audio data.
Based on the first aspect, in a possible implementation, the first decoder is further configured to: decode the third audio packet to obtain third indication information, determine a first start time based on the third indication information, and control the first speaker to perform playing at the first start time based on the third audio data. The second decoder is further configured to: decode the fourth audio packet to obtain fourth indication information, determine a second start time based on the fourth indication information, and control the second speaker to perform playing at the second start time based on the fourth audio data.
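For illustration, the following sketch shows one way a decoder could turn decoded indication information into a playing start time and hold playback until that time. The 312.5 μs tick duration and the helper names are assumptions of this sketch, not values defined by this application.

HALF_SLOT_US = 312.5   # assumed duration of one clock tick, in microseconds

def start_time_us(clock_ticks: int, clock_offset_ticks: int) -> float:
    """Combine the clock field and the clock offset field into a playing start time."""
    return (clock_ticks + clock_offset_ticks) * HALF_SLOT_US

def play_at(start_us: float, now_us: float, samples) -> None:
    delay_us = max(0.0, start_us - now_us)
    print(f"waiting {delay_us:.1f} us, then playing {len(samples)} samples")

play_at(start_time_us(1000, 2), now_us=312000.0, samples=[0] * 480)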
Based on the first aspect, in a possible implementation, the first audio packet further includes fifth indication information, and the second audio packet further includes sixth indication information. The fifth indication information and the sixth indication information indicate the audio device to perform synchronous audio data processing on the first audio data and the second audio data that are captured at a same time.
Because communication between the first headset and the audio device and communication between the second headset and the audio device are independent of each other, neither the first headset nor the second headset can learn of information of the other, for example, the time at which the audio data is captured or the time at which the audio data starts to be played. Therefore, the first audio packet further includes the fifth indication information indicating the capturing time of the first audio data, and the second audio packet further includes the sixth indication information indicating the capturing time of the second audio data, so that the audio device performs, based on the fifth indication information and the sixth indication information, synchronous audio data processing on the first audio data and the second audio data that are captured at a same time. In this way, a degraded audio data processing effect caused by processing differently some first audio data and second audio data that are captured at the same time can be avoided (for example, treble processing needs to be performed simultaneously on the first audio data and the second audio data that are captured at the same time; if the capturing times of the first audio data and the second audio data are not synchronized, the audio device may perform treble processing on the first audio data while performing bass processing on the second audio data captured at the same time), so that the audio data processing effect is improved.
In addition, after the audio device processes the first audio data and the second audio data, the third audio packet and the fourth audio packet that are generated include the third indication information and the fourth indication information respectively. The third indication information and the fourth indication information are set, so that the first headset and the second headset can perform synchronous audio playing, thereby improving audio playing effect.
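As a purely illustrative sketch (not an algorithm defined by this application), the following Python code pairs first-channel and second-channel audio frames by their capturing timestamps so that frames captured at the same time can be processed together. The tolerance value and the data layout are assumptions of this sketch.

def pair_by_capture_time(left_frames, right_frames, tolerance_us=100):
    """left_frames / right_frames: lists of (capture_time_us, samples)."""
    pairs = []
    right_iter = iter(sorted(right_frames))
    right = next(right_iter, None)
    for l_time, l_samples in sorted(left_frames):
        # Skip right frames that are too old to match this left frame.
        while right is not None and right[0] < l_time - tolerance_us:
            right = next(right_iter, None)
        if right is not None and abs(right[0] - l_time) <= tolerance_us:
            pairs.append((l_time, l_samples, right[1]))
            right = next(right_iter, None)
    return pairs

left = [(0, [1, 2]), (1000, [3, 4])]
right = [(10, [5, 6]), (990, [7, 8])]
print(pair_by_capture_time(left, right))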
Based on the first aspect, in a possible implementation, the audio data processing includes at least one of: noise reduction, amplification, pitch conversion, or stereo synthesis.
Based on the first aspect, in a possible implementation, the first encoder is configured to encode the first audio data based on a frame format in a communication protocol, to obtain the first audio packet. The second encoder is configured to encode the second audio data based on the frame format, to obtain the second audio packet. The frame format includes at least one of a clock field, a clock offset field, an audio packet length field, and an audio data field, and the clock field and the clock offset field indicate an audio data capturing time.
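The following sketch illustrates packing and unpacking an audio packet with a clock field, a clock offset field, an audio packet length field, and an audio data field. The field widths are assumptions chosen for this sketch; the communication protocol in use defines the actual frame layout.

import struct

HEADER = struct.Struct("<IHH")   # assumed widths: 4-byte clock, 2-byte clock offset, 2-byte length

def encode_audio_packet(clock: int, clock_offset: int, audio: bytes) -> bytes:
    return HEADER.pack(clock, clock_offset, len(audio)) + audio

def decode_audio_packet(packet: bytes):
    clock, clock_offset, length = HEADER.unpack_from(packet)
    audio = packet[HEADER.size:HEADER.size + length]
    return clock, clock_offset, audio

pkt = encode_audio_packet(clock=123456, clock_offset=7, audio=b"\x00\x01\x02")
print(decode_audio_packet(pkt))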
Based on the first aspect, in a possible implementation, before transmitting the first audio packet to the audio device, the first wireless transceiver is further configured to establish the first communication connection with the audio device, and perform clock calibration with the audio device over the first communication connection. Before transmitting the second audio packet to the audio device, the second wireless transceiver is further configured to establish the second communication connection with the audio device, and perform clock calibration with the audio device over the second communication connection.
Clock calibration performed by the first wireless transceiver and the second wireless transceiver with the audio device helps improve the accuracy of time synchronization between each wireless transceiver and the audio device. Therefore, the audio device can more accurately determine the first audio data and the second audio data that are captured at a same time. In this way, the accuracy of the audio data processing performed by the audio device is improved. In addition, the first headset and the second headset may further determine the playing times of the third audio data and the fourth audio data more accurately, which helps improve the sound quality of audio playing.
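For illustration only, the sketch below estimates the offset between a headset clock and the audio device clock from one request/response exchange, which is one simple way clock calibration could be approximated. It is not the calibration procedure mandated by this application or by any particular protocol.

def estimate_clock_offset(t_send_us, t_device_us, t_receive_us):
    """t_send/t_receive are headset timestamps around one round trip;
    t_device is the device timestamp carried in its reply."""
    round_trip = t_receive_us - t_send_us
    # Assume the reply took half the round trip to arrive.
    return t_device_us + round_trip / 2 - t_receive_us

offset = estimate_clock_offset(t_send_us=0, t_device_us=505, t_receive_us=20)
print(f"estimated device-minus-headset offset: {offset:.1f} us")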
Based on the first aspect, in a possible implementation, the first wireless transceiver and the second wireless transceiver communicate with the audio device based on a short range communication technology.
The short range communication technology may include, but is not limited to, a Bluetooth communication technology, a Wi-Fi communication technology, or the like.
According to a second aspect, an embodiment of this application provides a headset apparatus, used in any headset in a wireless headset, where the wireless headset includes a first headset and a second headset, and the first headset and the second headset are configured to capture audio data of different channels respectively, and communicate with an audio device over different communication connections. The headset apparatus includes: an encoder, configured to obtain first audio data from a microphone, and encode the first audio data to obtain a first audio packet, where the first audio data is audio data corresponding to a first channel; and a wireless transceiver, coupled to the encoder, and configured to establish a first communication connection with the audio device and transmit the first audio packet to the audio device over the first communication connection.
The headset apparatus may be an integrated circuit or a chip. When the headset apparatus is a chip, the headset apparatus may include one or more chips. The wireless headset may include a plurality of independent headset apparatuses. For example, when the headset apparatus is disposed in each headset of a pair of wireless headsets, the wireless headset may include two independent headset apparatuses. One of the headset apparatuses is configured to implement communication between the first headset (for example, the left headset) and the audio device. The other headset apparatus is configured to implement communication between the second headset (for example, the right headset) and the audio device.
Based on the second aspect, in a possible implementation, the wireless transceiver is further configured to receive first indication information from the audio device, and determine, based on the first indication information, a communication period used by the wireless transceiver to transmit the first audio packet in a communication cycle.
Based on the second aspect, in a possible implementation, the communication period includes at least one first communication sub-period and at least one second communication sub-period, where the at least one first communication sub-period is used to transmit the first audio packet to the audio device; and the at least one second communication sub-period is used to retransmit the first audio packet to the audio device when the first audio packet is unsuccessfully transmitted.
Based on the second aspect, in a possible implementation, the headset apparatus further includes: a decoder; the wireless transceiver is further configured to: receive a second audio packet from the audio device, where the second audio packet is generated after the audio device performs audio data processing on the first audio data in the first audio packet; and the decoder is configured to decode the second audio packet to obtain second audio data.
Based on the second aspect, in a possible implementation, the decoder is further configured to decode the second audio packet to obtain second indication information, determine a start time based on the second indication information, and control a speaker to perform playing at the start time based on the second audio data.
Based on the second aspect, in a possible implementation, the encoder is configured to encode the first audio data based on a frame format in a communication protocol, to obtain the first audio packet. The frame format includes at least one of a clock field, a clock offset field, an audio packet length field, and an audio data field, and the clock field and the clock offset field indicate an audio data capturing time.
Based on the second aspect, in a possible implementation, the wireless transceiver communicates with the audio device based on a short range communication technology.
According to a third aspect, this application provides an audio device, where the audio device includes: a wireless transceiver, configured to receive a first audio packet from a first headset in a wireless headset over a first communication connection, and receive a second audio packet from a second headset in the wireless headset over a second communication connection; and an audio processor, configured to decode the first audio packet to obtain first audio data, decode the second audio packet to obtain second audio data, and perform audio data processing on the first audio data and the second audio data to generate third audio data and fourth audio data respectively, where the first communication connection is different from the second communication connection.
It should be noted that the first communication connection being different from the second communication connection means that the data transmission paths are different. That is, the first headset and the second headset each perform data transmission with the audio device independently.
Based on the third aspect, in a possible implementation, the wireless transceiver is further configured to transmit first indication information to the first headset, and transmit second indication information to the second headset, where the first indication information is for indicating a first communication period in which the first headset transmits the first audio packet in a communication cycle; and the second indication information is for indicating a second communication period in which the second headset transmits the second audio packet in the communication cycle.
Based on the third aspect, in a possible implementation, the first communication period and the second communication period are time division periods in the communication cycle.
Based on the third aspect, in a possible implementation, the audio device includes a plurality of transceiver antennas. The first communication period and the second communication period are same time periods in the communication cycle.
Based on the third aspect, in a possible implementation, the first communication period includes at least one first communication sub-period and at least one second communication sub-period. The first communication sub-period is used to receive the first audio packet from the first headset, and the second communication sub-period is used to re-receive the first audio packet from the first headset when the first audio packet is unsuccessfully received.
Based on the third aspect, in a possible implementation, the second communication period includes at least one third communication sub-period and at least one fourth communication sub-period. The third communication sub-period is used to receive the second audio packet from the second headset, and the fourth communication sub-period is used to re-receive the second audio packet from the second headset when the second audio packet is unsuccessfully received.
Based on the third aspect, in a possible implementation, the audio processor is further configured to: encode the third audio data and third indication information to generate a third audio packet, where the third indication information is for indicating a first start time at which the first headset performs playing based on the third audio data; and encode the fourth audio data and fourth indication information to generate a fourth audio packet, where the fourth indication information is for indicating a second start time at which the second headset performs playing based on the fourth audio data.
Based on the third aspect, in a possible implementation, the wireless transceiver is further configured to: transmit the third audio packet to the first headset over the first communication connection; and transmit the fourth audio packet to the second headset over the second communication connection.
Based on the third aspect, in a possible implementation, the audio processor is configured to: decode the first audio packet to obtain fifth indication information; decode the second audio packet to obtain sixth indication information; and perform, based on the fifth indication information and the sixth indication information, synchronous audio data processing on the first audio data and the second audio data that are captured at a same time, to generate the third audio data and the fourth audio data.
Based on the third aspect, in a possible implementation, the audio data processing includes at least one of: noise reduction, amplification, pitch conversion, or stereo synthesis.
Based on the third aspect, in a possible implementation, the audio processor is configured to: encode the third audio data and the third indication information based on a frame format in a communication protocol, to generate the third audio packet, and encode the fourth audio data and the fourth indication information based on the frame format, to generate the fourth audio packet. The frame format includes at least one of a clock field, a clock offset field, an audio packet length field, and an audio data field, and the clock field and the clock offset field indicate a time at which the audio data starts to be played.
Based on the third aspect, in a possible implementation, the audio processor is further configured to: before receiving the first audio packet from the first headset, establish the first communication connection with the first headset, and perform clock calibration with the first headset over the first communication connection; and before receiving the second audio packet from the second headset, establish the second communication connection with the second headset, and perform clock calibration with the second headset over the second communication connection.
Based on the third aspect, in a possible implementation, the audio device communicates with the first headset and the second headset based on a short range communication technology.
According to a fourth aspect, this application provides an audio system, where the audio system includes: the wireless headset according to the first aspect, and the audio device according to the third aspect.
According to a fifth aspect, this application provides a wireless headset communication method, where the communication method includes: First audio data corresponding to a first channel and second audio data corresponding to a second channel are obtained through voice capturing. The first audio data and the second audio data are respectively encoded to generate a first audio packet and a second audio packet. The first audio packet is transmitted to an audio device over a first communication connection with the audio device. The second audio packet is transmitted to the audio device over a second communication connection with the audio device. The first communication connection is different from the second communication connection, and the first channel and the second channel are used to implement stereo sound effect.
Based on the fifth aspect, in a possible implementation, the communication method further includes: First indication information is received from the audio device. Based on the first indication information, a first communication period used by the first wireless transceiver to transmit the first audio packet in a communication cycle is determined. Second indication information is received from the audio device. Based on the second indication information, a second communication period used by the second wireless transceiver to transmit the second audio packet in the communication cycle is determined.
Based on the fifth aspect, in a possible implementation, the first communication period and the second communication period are time division periods in the communication cycle.
Based on the fifth aspect, in a possible implementation, the first communication period and the second communication period are same time periods in the communication cycle.
Based on the fifth aspect, in a possible implementation, the first communication period includes at least one first communication sub-period and at least one second communication sub-period. The at least one first communication sub-period is used to transmit the first audio packet to the audio device, and the at least one second communication sub-period is used to retransmit the first audio packet to the audio device when the first audio packet is unsuccessfully transmitted.
Based on the fifth aspect, in a possible implementation, the second communication period includes at least one third communication sub-period and at least one fourth communication sub-period. The third communication sub-period is used to transmit the second audio packet to the audio device, and the fourth communication sub-period is used to retransmit the second audio packet to the audio device when the second audio packet is unsuccessfully transmitted.
Based on the fifth aspect, in a possible implementation, the communication method further includes: The third audio packet is received from the audio device, where the third audio packet is generated after the audio device performs audio data processing on the first audio data in the first audio packet. The third audio packet is decoded to obtain third audio data. Playing is performed based on the third audio data. A fourth audio packet is received from the audio device, where the fourth audio packet is generated after the audio device performs audio data processing on the second audio data in the second audio packet. The fourth audio packet is decoded to obtain fourth audio data. Playing is performed based on the fourth audio data.
Based on the fifth aspect, in a possible implementation, the decoding the third audio packet to obtain third audio data, and performing playing based on the third audio data includes: The third audio packet is decoded to obtain third indication information. A first start time is determined based on the third indication information. Playing is performed at the first start time based on the third audio data. The decoding the fourth audio packet to obtain fourth audio data, and performing playing based on the fourth audio data includes: The fourth audio packet is decoded to obtain fourth indication information. A second start time is determined based on the fourth indication information. Playing is performed at the second start time based on the fourth audio data.
Based on the fifth aspect, in a possible implementation, the first audio packet further includes fifth indication information, and the second audio packet further includes sixth indication information. The fifth indication information and the sixth indication information indicate the audio device to perform synchronous audio data processing on the first audio data and the second audio data that are captured at a same time.
Based on the fifth aspect, in a possible implementation, the audio data processing includes at least one of: noise reduction, amplification, pitch conversion, or stereo synthesis.
Based on the fifth aspect, in a possible implementation, the encoding the first audio data and the second audio data respectively to generate a first audio packet and a second audio packet includes: Based on a frame format in a communication protocol, the first audio data is encoded to obtain the first audio packet, and the second audio data is encoded to obtain the second audio packet. The frame format includes a clock field, a clock offset field, an audio packet length field, and an audio data field, where the clock field and the clock offset field indicate an audio data capturing time.
Based on the fifth aspect, in a possible implementation, before the respectively transmitting the first audio packet and the second audio packet to the audio device, the method further includes: The first communication connection with the audio device is established, and clock calibration with the audio device is performed over the first communication connection. The second communication connection with the audio device is established, and clock calibration with the audio device is performed over the second communication connection.
Based on the fifth aspect, in a possible implementation, the first wireless transceiver and the second wireless transceiver communicate with the audio device based on a short range communication technology.
The short range communication technology may include, but is not limited to, a Bluetooth communication technology, a Wi-Fi communication technology, or the like.
According to a sixth aspect, this application provides an audio device communication method, where the communication method includes: A first audio packet is received from a first headset in a wireless headset over a first communication connection, and a second audio packet is received from a second headset in the wireless headset over a second communication connection. The first audio packet is decoded to obtain first audio data. The second audio packet is decoded to obtain second audio data. Audio data processing is performed on the first audio data and the second audio data, to generate third audio data and fourth audio data respectively. The first communication connection is different from the second communication connection.
Based on the sixth aspect, in a possible implementation, the communication method further includes: First indication information is transmitted to the first headset, and second indication information is transmitted to the second headset. The first indication information is for indicating a first communication period in which the first headset transmits the first audio packet in a communication cycle. The second indication information is for indicating a second communication period in which the second headset transmits the second audio packet in the communication cycle.
Based on the sixth aspect, in a possible implementation, the first communication period and the second communication period are time division periods in the communication cycle.
Based on the sixth aspect, in a possible implementation, the audio device includes a plurality of transceiver antennas. The first communication period and the second communication period are same time periods in the communication cycle.
Based on the sixth aspect, in a possible implementation, the first communication period includes at least one first communication sub-period and at least one second communication sub-period. The first communication sub-period is used to receive the first audio packet from the first headset, and the second communication sub-period is used to re-receive the first audio packet from the first headset when the first audio packet is unsuccessfully received.
Based on the sixth aspect, in a possible implementation, the second communication period includes at least one third communication sub-period and at least one fourth communication sub-period. The third communication sub-period is used to receive the second audio packet from the second headset, and the fourth communication sub-period is used to re-receive the second audio packet from the second headset when the second audio packet is unsuccessfully received.
Based on the sixth aspect, in a possible implementation, the communication method further includes: The third audio data and third indication information are encoded to generate a third audio packet, where the third indication information is for indicating a first start time at which the first headset performs playing based on the third audio data. The fourth audio data and fourth indication information are encoded to generate a fourth audio packet, where the fourth indication information is for indicating a second start time at which the second headset performs playing based on the fourth audio data.
Based on the sixth aspect, in a possible implementation, the communication method further includes: The third audio packet is transmitted to the first headset over the first communication connection. The fourth audio packet is transmitted to the second headset over the second communication connection.
Based on the sixth aspect, in a possible implementation, the decoding the first audio packet to obtain first audio data; the decoding the second audio packet to obtain second audio data; and the performing audio data processing on the first audio data and the second audio data to generate third audio data and fourth audio data respectively includes: The first audio packet is decoded to obtain fifth indication information. The second audio packet is decoded to obtain sixth indication information. Based on the fifth indication information and the sixth indication information, synchronous audio data processing is performed on the first audio data and the second audio data that are captured at a same time, to generate the third audio data and the fourth audio data.
Based on the sixth aspect, in a possible implementation, the audio data processing includes at least one of: noise reduction, amplification, pitch conversion, or stereo synthesis.
Based on the sixth aspect, in a possible implementation, the encoding the third audio data and third indication information to generate a third audio packet, and the encoding the fourth audio data and fourth indication information to generate a fourth audio packet includes: The third audio data and the third indication information are encoded based on a frame format in a communication protocol, to generate the third audio packet. The fourth audio data and the fourth indication information are encoded to generate the fourth audio packet. The frame format includes a clock field, a clock offset field, an audio packet length field, and an audio data field, where the clock field and the clock offset field indicate a time at which the audio data starts to be played.
Based on the sixth aspect, in a possible implementation, the communication method further includes: Before receiving the first audio packet from the first headset, the first communication connection with the first headset is established, and clock calibration with the first headset is performed over the first communication connection. Before receiving the second audio packet from the second headset, the second communication connection with the second headset is established, and clock calibration with the second headset is performed over the second communication connection.
Based on the sixth aspect, in a possible implementation, the audio device communicates with the first headset and the second headset based on a short range communication technology.
It should be understood that, the technical solutions in the second aspect to the sixth aspect of this application are consistent with the technical solution in the first aspect. Beneficial effects achieved in the various aspects and corresponding feasible implementations are similar, and details are not described again.
The following clearly describes the technical solutions in embodiments of this application with reference to the accompanying drawings in embodiments of this application. It is clear that the described embodiments are merely some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on embodiments of this application without creative efforts shall fall within the protection scope of this application.
“First”, “second”, and similar terms referred to herein do not indicate any order, quantity, or significance, but are merely used to distinguish between different components. Similarly, “one”, “a”, and similar terms do not indicate a quantity limitation either, but indicate that there is at least one. “Coupled” and similar terms are not limited to a direct physical or mechanical connection, but may include an electrical connection, whether direct or indirect, which is equivalent to a connection in a broad sense.
The term “exemplary” or “for example” in embodiments of this application means serving as an example, an illustration, or a description. Any embodiment or design scheme described as “exemplary” or “for example” in embodiments of this application should not be explained as being preferred over or having more advantages than another embodiment or design scheme. Rather, use of the word “exemplary”, “for example”, or the like is intended to present a related concept in a specific manner. In the descriptions of embodiments of this application, unless otherwise stated, “a plurality of” means two or more. For example, a plurality of headset apparatuses means two or more headset apparatuses.
Refer to
The wireless headset 10 shown in this embodiment of this application may further have a function of audio data capturing.
Refer to
In this embodiment of this application, the first headset and the second headset may respectively obtain audio data, and each communicate with the audio device over an air interface independently. Each of the first headset and the second headset may be provided with an independent headset apparatus. The headset apparatus may include an integrated circuit, a chip or a chipset, or a circuit board on which a chip or a chipset is mounted. The encoder 102, the decoder 104, and the wireless transceiver 103 shown in
In an actual application, a microphone may be integrated on an outer side of the first headset and the second headset in the wireless headset 10, away from the ear, as shown in
In this embodiment of this application, the audio device 20 may alternatively be a chip or a chipset, a circuit board on which a chip or a chipset is mounted, or an electronic device including the circuit board. The electronic device may include but is not limited to, a mobile phone, a tablet computer, a wearable device, an in-vehicle device, a smart TV, a notebook computer, an ultra-mobile personal computer (ultra-mobile personal computer, UMPC), a netbook, a personal digital assistant (personal digital assistant, PDA), and the like. A specific type of the electronic device is not limited in this embodiment of this application. A hardware structure of the audio device 20 is shown in
Refer to
The scenario shown in
The application scenario shown in
In the application scenario shown in
In addition, in another scenario, a headset C1 and a headset C2 may communicate with a first audio device via a short range communication technology, a headset D1 and a headset D2 may communicate with a second audio device via the short range communication technology, and the first audio device and the second audio device may communicate with each other via instant messaging technologies, as shown in
The first headset and the second headset in the wireless headset described in this embodiment of this application may each establish a connection with the audio device, and may each perform data transmission with the audio device independently. Therefore, the first headset and the second headset in the wireless headset are decoupled: data captured by one of the headsets does not need to be transmitted to the audio device through the other headset. In this way, the power consumption of the two headsets in the wireless headset is balanced. In addition, both headsets in the wireless headset can encode audio data and transmit the encoded audio data to the audio device. Therefore, compared with using one headset to communicate with the audio device, the amount of audio data transmitted in a transmission cycle can be increased, thereby reducing the amount by which the audio data is compressed and improving audio quality. Further, neither headset needs to transmit its obtained audio data to the other headset. Therefore, in a real-time audio processing scenario, the delay of data transmission between the headset and the audio device can be reduced, which helps improve user experience.
Based on the scenario shown in
Because communication between the first headset and the audio device and communication between the second headset and the audio device are independent of each other, neither the first headset nor the second headset can learn of information of the other, for example, the time at which the audio data is captured or the time at which the audio data starts to be played. Therefore, the audio device performs, based on the first indication information and the second indication information, synchronous audio data processing on the left-channel audio data and the right-channel audio data that are captured at a same time. In this way, a degraded audio data processing effect caused by processing differently some left-channel audio data and right-channel audio data that are captured at the same time can be avoided (for example, treble processing needs to be performed simultaneously on the left-channel audio data and the right-channel audio data that are captured at the same time; if the capturing times of the left-channel audio data and the right-channel audio data are not synchronized, the audio device may perform treble processing on the left-channel audio data while performing bass processing on the right-channel audio data captured at the same time), so that the audio data processing effect is improved. In addition, the third indication information and the fourth indication information are set, so that the first headset and the second headset can perform synchronous audio playing, thereby improving the audio playing effect.
In this embodiment of this application, the first headset and the second headset in the wireless headset 10 communicate with the audio device 20 based on a short range communication protocol. The following uses a Bluetooth protocol-based transmission scenario as an example to describe a communication method between a wireless headset 10 and an audio device 20 provided in an embodiment of this application.
In this embodiment of this application, both a first headset and a second headset in the wireless headset 10 shown in
In addition, in this embodiment of this application, the audio device 20 shown in
In a possible implementation of this embodiment of this application, before the headset A1 shown in
Clock calibration performed by the first wireless transceiver and the second wireless transceiver with the audio device helps improve the accuracy of time synchronization between each wireless transceiver and the audio device. Therefore, the audio device can more accurately determine the first audio data and the second audio data that are captured at a same time. In this way, the accuracy of the audio data processing performed by the audio device is improved. In addition, the first headset and the second headset may further determine the playing times of the third audio data and the fourth audio data more accurately, which helps improve the sound quality of audio playing.
Based on the schematic diagram of a structure of the audio system 100 shown in
In
Based on the communication interaction time sequence shown in
Step 401: A headset A1 transmits an audio packet C1 to an audio device.
In this embodiment, after the headset A1 establishes a first communication connection with the audio device by using a Bluetooth protocol, the headset A1 may encode first left-channel audio data with the frame format shown in Table 1 to generate the audio packet C1, and transmit the audio packet C1 to the audio device in a communication period T1 shown in
After transmitting the audio packet C1 to the audio device, the headset A1 may wait for response information from the audio device. The response information indicates whether the audio device successfully receives the audio packet C1.
Step 402: The audio device transmits response information R1 to the headset A1, where the response information R1 indicates that the audio packet C1 is not successfully received.
In this embodiment, after receiving the audio packet C1 from the headset A1, the audio device may decode the audio packet C1, to determine whether the audio packet C1 is successfully received. In particular, the audio device may determine, based on a packet length in a frame format shown in Table 1, whether data is lost. When data is lost, it indicates that the audio packet C1 is not successfully received. In addition, in some other scenarios, the audio device does not receive the audio packet C1 from the headset A1 at the appointed time, which also indicates that the audio packet C1 is not successfully received. After determining that the audio packet C1 is not successfully received, the audio device transmits the foregoing response information R1 to the headset A1 in a communication period T2 shown in
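As an illustration of the check described above, the following sketch compares the length field carried in a received frame with the amount of audio payload actually received. The function name is an assumption of this sketch.

def packet_received_ok(declared_length: int, payload: bytes) -> bool:
    # If the payload is shorter than the declared audio packet length, data was lost.
    return len(payload) == declared_length

print(packet_received_ok(240, b"\x00" * 240))   # True  -> acknowledge
print(packet_received_ok(240, b"\x00" * 180))   # False -> request retransmission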
Step 403: A headset A2 transmits an audio packet C2 to the audio device.
In this embodiment, after the headset A2 establishes a second communication connection with the audio device by using the Bluetooth protocol, the headset A2 may encode first right-channel audio data with the frame format shown in Table 1 to generate the audio packet C2, and transmit the audio packet C2 to the audio device in a communication period T3 shown in
After transmitting the audio packet C2 to the audio device, the headset A2 may wait for response information from the audio device. The response information indicates whether the audio device successfully receives the audio packet C2.
Step 404: The audio device transmits response information R2 to the headset A2, where the response information R2 indicates that the audio packet C2 is successfully received.
In this embodiment, after receiving the audio packet C2 from the headset A2, the audio device may decode the audio packet C2, to determine whether the audio packet C2 is successfully received. In particular, the audio device may determine, based on the packet length in the frame format shown in Table 1, whether data is lost. When data is not lost, it indicates that the audio packet C2 is successfully received. After determining that the audio packet C2 is successfully received, the audio device transmits the foregoing response information R2 to the headset A2 in a communication period T4 shown in
Step 405: The headset A1 retransmits the audio packet C1 to the audio device based on the response information R1.
In this embodiment, when the headset A1 receives the response information R1, it indicates that the audio packet C1 is not successfully received by the audio device. The headset A1 needs to retransmit the audio packet C1 to the audio device in a communication period T5 shown in
Step 406: The audio device transmits response information R3 to the headset A1, where the response information R3 indicates that the audio packet C1 is successfully received.
In this embodiment, the audio device may transmit the response information R3 to the headset A1 in a communication period T6 shown in
Step 407: The headset A2 transmits an audio packet C3 to the audio device.
In this embodiment, the headset A2 may encode second right-channel audio data with the frame format shown in Table 1 to generate the audio packet C3, and transmit the audio packet C3 to the audio device in a communication period T7 shown in
Step 408: The audio device transmits response information R4 to the headset A2, where the response information R4 indicates that the audio packet C3 is successfully received.
In this embodiment, the audio device may transmit the response information R4 to the headset A2 in a communication period T8 shown in
Step 409: The headset A1 transmits an audio packet C4 to the audio device.
In this embodiment, the headset A1 may encode second left-channel audio data with the frame format shown in Table 1 to generate the audio packet C4, and transmit the audio packet C4 to the audio device in a communication period T9 shown in
Step 410: The audio device transmits response information R5 to the headset A1, where the response information R5 indicates that the audio packet C4 is successfully received.
In this embodiment, the audio device may transmit the response information R5 to the headset A1 in a communication period T10 shown in
It should be understood that the communication interaction step between the wireless headset and the audio device shown in
It can be seen from step 401 to step 410 shown in
In the embodiments of communication interaction shown in
In a possible implementation, establishment of the communication connection between the wireless headset and the audio device and transmission of the audio packet may alternatively be triggered by the audio device. In this implementation, the audio device respectively initiates a connection request to the headset A1 and the headset A2, to respectively establish a communication connection with the headset A1 and the headset A2. After establishing the first communication connection with the headset A1, the audio device may transmit indication information to the headset A1, to indicate the headset A1 to transmit the audio packet. Similarly, after establishing the connection with the headset A2, the audio device may transmit indication information to the headset A2, to indicate the headset A2 to transmit the audio packet. In addition, the indication information transmitted by the audio device may further indicate whether an audio packet previously transmitted by the headset A1 or the headset A2 is successfully received, and whether the headset A1 or the headset A2 should continue to transmit an audio packet. In this implementation, an interaction time sequence between the audio device and the wireless headset is shown in
Step 601: An audio device transmits indication information Z1 to a headset A1, where the indication information Z1 indicates the headset A1 to transmit an audio packet C5 to the audio device.
In this embodiment, after the audio device establishes a first communication connection with the headset A1 by using a Bluetooth protocol, the audio device may transmit the indication information Z1 to the headset A1 in a communication period T1 shown in
Step 602: The headset A1 transmits the audio packet C5 to the audio device based on the indication information Z1.
In this embodiment, the headset A1 may encode first left-channel audio data with the frame format shown in Table 1 to generate the audio packet C5, and transmit the audio packet C5 to the audio device in a communication period T2 shown in
Step 603: The audio device transmits indication information Z2 to a headset A2, where the indication information Z2 indicates the headset A2 to transmit an audio packet C6 to the audio device.
In this embodiment, after the audio device establishes a second communication connection with the headset A2 by using the Bluetooth protocol, the audio device may transmit the indication information Z2 to the headset A2 in a communication period T3 shown in
Step 604: The headset A2 transmits the audio packet C6 to the audio device based on the indication information Z2.
In this embodiment, the headset A2 may encode first right-channel audio data with the frame format shown in Table 1 to generate the audio packet C6, and transmit the audio packet C6 to the audio device in a communication period T4 shown in
Step 605: The audio device transmits indication information Z3 to the headset A1, where the indication information Z3 indicates that the audio packet C5 is successfully received and indicates the headset A1 to continue transmitting the audio packet.
In this embodiment, after the audio device successfully receives the audio packet C5, the audio device may transmit the indication information Z3 to the headset A1 in a communication period T5 shown in
Step 606: The headset A1 transmits an audio packet C7 to the audio device based on the indication information Z3.
In this embodiment, the headset A1 determines, based on the indication information Z3, whether to continue transmitting the audio packet to the audio device. The indication information Z3 indicates that the audio packet C5 is successfully received, and indicates the headset A1 to continue transmitting the audio packet to the audio device. In this case, the headset A1 may encode second left-channel audio data with the frame format shown in Table 1 to generate the audio packet C7, and transmit the audio packet C7 to the audio device in a communication period T6 shown in
Step 607: The audio device transmits indication information Z4 to the headset A2, where the indication information Z4 indicates that the audio packet C6 is successfully received, and indicates the headset A2 to stop transmitting the audio packet.
In this embodiment, after successfully receiving the audio packet C6, the audio device may transmit the indication information Z4 to the headset A2 in a communication period T7 shown in
After receiving the indication information Z4 from the audio device, the headset A2 determines, based on the indication information Z4, whether to continue transmitting the audio packet to the audio device. The indication information Z4 indicates that the audio packet C6 is successfully received, and indicates that the headset A2 does not need to continue transmitting the audio packet to the audio device. In this case, the headset A2 may stop transmitting the audio packet.
Step 608: The audio device transmits indication information Z5 to the headset A1, where the indication information Z5 indicates that the audio packet C7 is successfully received and indicates the headset A1 to stop transmitting the audio packet.
In a specific implementation, after successfully receiving the audio packet C7, the audio device may transmit the indication information Z5 to the headset A1 in a communication period T9 shown in
In this embodiment, after receiving the indication information Z5 from the audio device, the headset A1 may determine whether to continue transmitting the audio packet to the audio device. The indication information Z5 indicates that the audio packet C7 is successfully received, and indicates the headset A1 to stop transmitting the audio packet to the audio device. In this case, the headset A1 may stop transmitting the audio packet.
It should be understood that the communication interaction steps between the wireless headset and the audio device shown in
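Taken together, steps 601 to 608 describe an exchange in which the audio device drives both headsets with indication information. The following is a minimal sketch of that uplink schedule, reusing the assumed `IndicationInfo` type from the earlier sketch; the `exchange()` method and the per-round scheduling are hypothetical stand-ins for the over-the-air communication periods, not the actual protocol.

```python
def audio_device_uplink(headsets, rounds=2):
    """Illustrative uplink schedule loosely following steps 601-608.

    `headsets` maps a name (e.g. "A1", "A2") to an object whose hypothetical
    exchange(indication) method delivers the indication information during one
    communication period and returns the headset's audio packet (or None when
    the headset only acknowledges and stops).
    """
    received = {name: [] for name in headsets}
    for round_index in range(rounds):
        last_round = round_index == rounds - 1
        for name, headset in headsets.items():
            indication = IndicationInfo(
                request_audio=not last_round,
                ack_previous=True if received[name] else None,
                continue_transmitting=not last_round,
            )
            packet = headset.exchange(indication)  # one communication period
            if packet is not None:
                received[name].append(packet)
    return received
```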
In this embodiment of this application, after receiving the audio packet including the left-channel audio data and the audio packet including the right-channel audio data from the headset A1 and the headset A2, the audio device may decode the audio packets, to obtain the left-channel audio data and the right-channel audio data. Then, audio data processing is performed on the left-channel audio data and the right-channel audio data. In particular, the audio device may perform, based on the Bluetooth clock information and the clock offset information in each audio packet, synchronous audio data processing on the left-channel audio data and the right-channel audio data that are captured at the same time. For example, data combination is performed on the left-channel audio data and the right-channel audio data that are captured at the same time. Then, processing such as noise elimination and tone adjustment is performed on the combined audio data, and left-channel and right-channel separation is performed on the processed audio data. Finally, the separated left-channel audio data and right-channel audio data are respectively encoded and transmitted to the headset A1 and the headset A2 shown in
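As a rough sketch of the synchronization step described above, the code below pairs left-channel and right-channel packets by a capture-time value derived from each packet's Bluetooth clock information and clock offset, then interleaves the decoded samples. The helper names (`capture_time`, `combine_stereo`) and the way the capture time is computed are assumptions made for illustration only.

```python
def capture_time(packet):
    """Hypothetical capture-time estimate from a packet's Bluetooth clock
    information and clock offset (both assumed to be integer fields)."""
    return packet.bt_clock + packet.clock_offset


def combine_stereo(left_packets, right_packets, decode):
    """Pair left and right packets captured at the same time and interleave
    their decoded samples into one stereo stream. Noise elimination and tone
    adjustment, mentioned above, are omitted from this sketch."""
    right_by_time = {capture_time(p): p for p in right_packets}
    stereo = []
    for left in left_packets:
        right = right_by_time.get(capture_time(left))
        if right is None:
            continue  # no right-channel data captured at the same time
        stereo.extend(zip(decode(left), decode(right)))
    return stereo
```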
Based on the scenario shown in
In
Based on the time sequence shown in
Step 801: The audio device transmits an audio packet C8 to a headset A1.
In this embodiment, the audio device may encode the processed left-channel audio data in the frame format shown in Table 1 to generate the audio packet C8, and then transmit the audio packet C8 to the headset A1 in a communication period T1 shown in
Step 802: The headset A1 transmits response information R6 to the audio device, where the response information R6 indicates that the audio packet C8 is successfully received.
In this embodiment, the headset A1 may transmit the response information R6 to the audio device in a communication period T2 shown in
Step 803: The audio device transmits an audio packet C9 to a headset A2.
In this embodiment, the audio device may encode the processed right-channel audio data in the frame format shown in Table 1 to generate the audio packet C9, and then transmit the audio packet C9 to the headset A2 in a communication period T3 shown in
Step 804: The headset A2 transmits response information R7 to the audio device, where the response information R7 indicates that the audio packet C9 is successfully received.
In this embodiment, the headset A2 may transmit the response information R7 to the audio device in a communication period T4 shown in
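The downlink side of the exchange (steps 801 to 804) can be sketched in the same style. The `send_and_wait_for_ack` method below is a hypothetical stand-in for transmitting one audio packet and receiving the corresponding response information in consecutive communication periods.

```python
def audio_device_downlink(headset_a1, headset_a2, left_packet, right_packet):
    """Illustrative downlink schedule following steps 801-804: the processed
    left-channel packet goes to headset A1 and the processed right-channel
    packet goes to headset A2, each acknowledged with response information."""
    ack_a1 = headset_a1.send_and_wait_for_ack(left_packet)   # periods T1/T2
    ack_a2 = headset_a2.send_and_wait_for_ack(right_packet)  # periods T3/T4
    return ack_a1 and ack_a2  # True only if both packets were received
```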
It should be understood that the communication interaction steps between the wireless headset and the audio device shown in
In the interaction time sequence between the audio device and the wireless headset shown in
As shown in
In the communication interaction time sequence shown in
In this embodiment of this application, after the audio device respectively establishes the connection with the headset A1 and the headset A2, the audio device may perform interaction through a plurality of complete events. For example, the headset A1 and the headset A2 may respectively encode, based on the interaction time sequence of a complete event shown in
Based on the embodiments shown in
In
It should be noted that each software module in the headset apparatus 100 shown in
In addition, the headset apparatus 100 may further include a memory. The encoder 102 or the decoder 104 may invoke all or some of the computer programs stored in the memory to control and manage an action of the headset apparatus 100, for example, to support the headset apparatus 100 in performing the steps performed by the foregoing modules. The memory may be configured to store program code, data, and the like for the headset apparatus 100. The encoder 102 or the decoder 104 may include a programmable logic device, a transistor logic device, a discrete hardware component, or the like.
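For illustration only, the module arrangement described above might be organized as in the following sketch. The class name, constructor arguments, and method names are assumptions and do not correspond to reference numerals in this application beyond the encoder 102 and the decoder 104 mentioned above.

```python
class HeadsetApparatus:
    """Schematic arrangement of one headset's software modules (a sketch)."""

    def __init__(self, microphone, encoder, decoder, transceiver, speaker):
        self.microphone = microphone
        self.encoder = encoder        # corresponds to the encoder 102
        self.decoder = decoder        # corresponds to the decoder 104
        self.transceiver = transceiver
        self.speaker = speaker

    def capture_and_send(self):
        """Capture one channel of audio, encode it, and transmit the packet."""
        audio = self.microphone.capture()
        packet = self.encoder.encode(audio)
        self.transceiver.transmit(packet)

    def receive_and_play(self):
        """Receive an audio packet, decode it, and play the audio."""
        packet = self.transceiver.receive()
        audio = self.decoder.decode(packet)
        self.speaker.play(audio)
```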
This embodiment further provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions. When the computer instructions are run on a computer, the computer is enabled to perform the related method steps, to implement audio data capturing, communication with an audio device, and audio playing in the foregoing embodiment.
This embodiment further provides a computer program product. When the computer program product is run on a computer, the computer is enabled to perform the related steps, to implement audio data capturing, communication with an audio device, and audio playing in the foregoing embodiment.
The computer-readable storage medium and the computer program product provided in the embodiments are both configured to perform the foregoing corresponding methods. Therefore, for the beneficial effects that can be achieved, refer to the beneficial effects of the foregoing corresponding methods. Details are not described herein again.
The foregoing descriptions about the implementations allow a person skilled in the art to understand that, for the purpose of convenient and brief description, division into the foregoing functional modules is taken merely as an example for illustration. In actual application, the foregoing functions can be allocated to different functional modules and implemented as required; that is, an inner structure of an apparatus is divided into different functional modules to implement all or some of the functions described above.
In addition, functional units in the embodiments of this application may be integrated into one processing unit, each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in a form of hardware, or may be implemented in a form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, the integrated unit may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or a part of the technical solutions may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for enabling a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or some of the steps of the methods in the embodiments of this application. The readable storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but are not intended to limit the protection scope of this application. Any variation or replacement readily figured out by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.
This application is a continuation of International Application No. PCT/CN2020/142463, filed on Dec. 31, 2020, the disclosure of which is hereby incorporated by reference in its entirety.
Parent application: PCT/CN2020/142463, filed December 2020, US
Child application: 18344206, US