The present disclosure relates generally to voice processing and more particularly to beamforming systems and methods of applying dual or multi-input noise suppression.
Mobile devices such as, but not limited to, mobile phones, smart phones, personal digital assistants (PDAs), tablets, laptops or other electronic devices, etc., increasingly include voice recognition systems to provide hands-free voice control of the devices. Although voice recognition technologies have been improving, accurate voice recognition remains a technical challenge when the voice of interest is in the presence of other talkers or ambient noise. These technical challenges exist not only for voice recognition technologies, but also for voice processing such as that used in telephony, which today may be performed using almost any electronic device having a suitable telephony application, notwithstanding the prevalence of mobile phones and smart phones.
A particular challenge when implementing voice transmission or voice recognition systems on mobile devices is that many types of mobile devices support use cases where the user (and therefore the user's voice) may be at different positions relative to the mobile device depending on the use case. Adding to the challenge is that various noise sources, including other talkers (i.e., jammer voices), may also be located at different positions relative to the mobile device. Some of these noise sources may vary as a function of time in terms of location and magnitude. All of these factors make up the acoustic environment in which a mobile device operates and impact the sound picked up by the mobile device microphones. As the mobile device is moved or is positioned in certain ways, its acoustic environment changes accordingly, thereby changing the sound picked up by the mobile device's microphones. Voice sound that may be recognized by the voice recognition system, or by a listener on the receiving side of a voice transmission system, under one acoustic environment may be unrecognizable under changed conditions due to mobile device motion, positioning, or ambient noise levels. Various other conditions in the surrounding environment can add noise or echo, or cause other acoustically undesirable conditions, that also adversely impact the voice recognition system or voice transmission system.
More specifically, the mobile device acoustic environment impacts the operation of signal processing components such as microphone arrays, noise suppressors, echo cancellation systems and signal conditioning that are used to improve both voice recognition and voice call performance. For mobile devices, and also for stationary devices, the desired speaker as well as jammer speakers or other noise sources may change locations with respect to the device microphones. Such changes also degrade the acoustic environment and may render voice unrecognizable by the voice recognition system or a listener due to noise interference caused by the jammer speakers or other noise sources.
Briefly, a method of operation of the disclosed embodiments includes beamforming a plurality of microphone outputs to obtain a plurality of virtual microphone audio channels. Each virtual microphone audio channel corresponds to a beamform. The virtual microphone audio channels include at least one voice channel and at least one noise channel. The method includes performing voice activity detection on the at least one voice channel and adjusting a corresponding voice beamform until voice activity detection indicates that voice is present on the at least one voice channel.
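By way of illustration, the following minimal Python sketch shows the adjust-until-voice loop just described. The `beamformer`, `vad`, and `candidate_patterns` objects and their methods are hypothetical stand-ins, not interfaces from this disclosure, for a beamformer, a voice activity detector, and a set of candidate beamform patterns.

```python
def steer_voice_beam(beamformer, vad, candidate_patterns, mic_frame):
    """Adjust the voice beamform until voice activity detection indicates
    that voice is present on the voice channel (a hedged sketch)."""
    for pattern in candidate_patterns:
        # Each pattern yields one virtual microphone audio channel.
        voice_channel = beamformer.apply(pattern, mic_frame)
        if vad.is_voice(voice_channel):
            return pattern   # voice detected; keep this voice beamform
    return None              # no voice yet; continue adjusting on later frames
```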
The method may further include performing voice activity detection on the at least one noise channel and adjusting a corresponding noise beamform until voice activity detection indicates that voice is not substantially present on the at least one noise channel. The method may further include performing energy estimation on the at least one noise channel and adjusting a corresponding noise beamform until energy estimation indicates that the at least one noise channel is receiving audio from a dominant audio energy source. The method may further include performing voice recognition on the at least one voice channel and adjusting the corresponding voice beamform to improve a voice recognition confidence metric of the voice recognition. The method may further include performing voice recognition on the at least one noise channel and adjusting the corresponding noise beamform to decrease a voice recognition confidence metric of the voice recognition performed on the noise beam.
In some embodiments, performing voice recognition on the at least one noise channel may include performing voice recognition on the at least one noise channel using trained voice recognition that is trained to identify a specific speaker. The method may further include configuring the plurality of microphone outputs initially based on a detected orientation of a corresponding group of microphones.
Another method of operation of the disclosed embodiments includes beamforming a plurality of microphone outputs to obtain a plurality of virtual microphone audio channels, where each virtual microphone audio channel corresponds to a beamform, and with at least one voice channel and at least one noise channel. The method includes performing voice recognition on the at least one voice channel and adjusting the corresponding voice beamform to improve a voice recognition confidence metric of the voice recognition.
In some embodiments, performing voice recognition on the at least one voice channel may include performing voice recognition on the at least one voice channel using trained voice recognition that is trained to identify a specific speaker. The method may further include performing voice activity detection on the at least one noise channel and adjusting a corresponding noise beamform until voice activity detection indicates that voice is not substantially present on the at least one noise channel. The method may further include performing energy estimation on the at least one noise channel and adjusting the corresponding noise beamform until energy estimation indicates that the at least one noise channel is receiving audio from a dominant audio energy source. The method may further include performing voice activity detection on the at least one noise channel and adjusting a corresponding noise beamform until voice activity detection indicates that voice is present on the at least one noise channel. The method may further include performing voice recognition on the at least one noise channel and adjusting the corresponding noise beamform to decrease a voice recognition confidence metric of the voice recognition. The method may further include performing voice recognition on the at least one noise channel using trained voice recognition that is trained to identify a specific speaker. The method may further include performing voice recognition on the at least one noise channel in response to voice activity detection indicating that voice is present on the at least one noise channel. The method may further include adjusting the corresponding noise beamform to decrease a voice recognition confidence metric of the trained voice recognition.
The disclosed embodiments also provide an apparatus that includes a beamformer, operatively coupled to a plurality of microphone outputs. The beamformer is operative to provide, as beamformer outputs, a plurality of virtual microphone audio channels where each virtual microphone audio channel corresponds to a beamform and with at least one voice channel and at least one noise channel. A beamformer controller is operatively coupled to the beamformer and is operative to monitor the at least one voice channel and the at least one noise channel to determine if voice is present on either of the at least one voice channel or the at least one noise channel. The beamformer controller is also operative to control the beamformer to adjust a beamform corresponding to the at least one voice channel until voice is present on the at least one voice channel. In some embodiments, the beamformer controller is also operative to control the beamformer to adjust a beamform corresponding to the at least one noise channel until voice is not substantially present on the at least one noise channel.
In one embodiment, a voice activity detector is operatively coupled to the beamformer to receive the at least one voice channel, and to the beamformer controller. The beamformer controller of this embodiment is operative to monitor the at least one voice channel to determine if voice is present by monitoring input received from the voice activity detector. In another embodiment, a voice recognition engine is operatively coupled to the beamformer to receive the at least one voice channel, and to the beamformer controller. The voice recognition engine is operative to perform voice recognition on the at least one voice channel to detect voice, and the beamformer controller is operative to monitor the at least one voice channel to determine if voice is present by monitoring input received from the voice recognition engine. The input may be, for example, voice confidence metrics.
In another embodiment, a voice recognition engine is operatively coupled to the beamformer to receive the at least one voice channel and at least one noise channel. The voice recognition engine is operative to perform voice recognition on the at least one voice channel and at least one noise channel to detect voice. A beamformer controller is operatively coupled to the beamformer, to a voice activity detector and to the voice recognition engine. The beamformer controller is operative to, among other things, monitor the voice activity detector to determine if voice is present on either of the at least one voice channel or the at least one noise channel and control the beamformer to adjust a corresponding voice beamform until voice activity detection or the voice recognition engine indicates that voice is present on the at least one voice channel and adjust a corresponding noise beamform until voice activity detection or the voice recognition engine indicates that voice is not substantially present on the at least one noise channel.
In some embodiments, the apparatus may also include an energy estimator, operatively coupled to the beamformer and to the voice activity detector. In some embodiments, the apparatus may further include microphone configuration logic, operatively coupled to the beamformer. The microphone configuration logic may include switch logic that is operative to switch any microphone output of the plurality of microphone outputs on or off. In some embodiments, the apparatus may also include a noise estimator, operatively coupled to the voice activity detector.
In another embodiment, a method of operation includes beamforming a plurality of microphone outputs to obtain at least one virtual microphone channel, performing voice recognition on the at least one virtual microphone channel, and adjusting a corresponding beamform until voice recognition indicates one of the presence of voice on the at least one virtual microphone channel or that voice is not substantially present on the at least one virtual microphone channel. In some embodiments, performing voice recognition may include performing voice recognition on the at least one virtual microphone channel using trained voice recognition that is trained to identify a specific speaker.
The apparatus 100 may also include an internal communication bus for providing operative coupling between the various components, circuitry, and devices. The terminology “operatively coupled” as used herein refers to coupling that enables operational and/or functional communication and relationships between the various components, circuitry, devices, etc. described as being operatively coupled, and may include any intervening items (i.e., buses, connectors, other components, circuitry, devices, etc.) used to enable such communication, such as, for example, internal communication buses such as data communication buses or any other intervening items that one of ordinary skill would understand to be present. Also, it is to be understood that other intervening items may be present between “operatively coupled” items even though such other intervening items are not necessary to the functional communication facilitated by the operative coupling. For example, a data communication bus may be present in various embodiments of the apparatus 100 and may provide data to several items along a pathway along which two or more items are operatively coupled. Such operative coupling is shown generally in the drawings.
The microphone configuration logic 120 may include various front end processing, such as, but not limited to, signal amplification, analog-to-digital conversion/digital audio sampling, echo cancellation, etc., which may be applied to the outputs of the group of microphones 110 prior to performing additional, less power-efficient signal processing such as noise suppression. In some embodiments, the microphone configuration logic 120 may also include switch logic operatively coupled to the group of microphones 110 and operative to respond to control signals to individually turn each of the microphones on or off to configure the microphones in various ways. Alternatively, in some embodiments, the microphones may be turned on or off by adjusting a gain or amplifier associated with a corresponding microphone output. For example, a microphone may be turned off by reducing a gain value to zero for the corresponding microphone output. Additionally, in some embodiments, the microphone configuration logic 120 may be operative to receive control signals from other components of the apparatus 100 to adjust front end processing parameters such as, for example, amplifier gain.
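The gain-based switching just described can be sketched in a few lines. This is a minimal illustration only; the array layout and the idea of muting a microphone with a zero gain follow the paragraph above, while the function name and NumPy usage are assumptions.

```python
import numpy as np

def apply_mic_config(mic_frames, gains):
    """Scale each physical microphone output by its configured gain.

    mic_frames: (n_mics, n_samples) array of raw microphone outputs.
    gains:      length n_mics; a gain of 0.0 turns that microphone off.
    """
    return np.asarray(gains)[:, None] * np.asarray(mic_frames)
```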
The microphone configuration logic 120 is operatively coupled to beamformer 130. In some embodiments, the beamformer 130 may be implemented as a single beamformer with multiple outputs. Each output of the beamformer 130 represents a virtual microphone signal, where the virtual microphone is created by beamforming the outputs from one or more physical microphones of the group of microphones 110.
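One common way to create such a virtual microphone signal is weighted delay-and-sum beamforming. The sketch below is one possible illustration, not necessarily the technique used by the beamformer 130; the function name and the frequency-domain formulation are assumptions.

```python
import numpy as np

def virtual_mic(mic_signals, steering_delays_s, fs, weights=None):
    """Combine physical microphone outputs into one virtual microphone
    signal via weighted delay-and-sum beamforming.

    mic_signals:       (n_mics, n_samples) physical microphone outputs.
    steering_delays_s: per-microphone delays (seconds) for the look direction.
    """
    n_mics, n = mic_signals.shape
    w = np.ones(n_mics) / n_mics if weights is None else np.asarray(weights)
    spectra = np.fft.rfft(mic_signals, axis=1)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    # A pure delay is a linear phase shift in the frequency domain.
    phase = np.exp(-2j * np.pi * freqs[None, :]
                   * np.asarray(steering_delays_s)[:, None])
    return np.fft.irfft((w[:, None] * spectra * phase).sum(axis=0), n)
```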
In some embodiments, a device orientation detector 105 is operatively coupled to the microphone configuration logic 120 and to one or more orientation sensors 107. One example of an orientation sensor is a gyroscope, from which the device orientation detector 105 may receive sensor data over connection 106 and determine the positioning of the mobile device. For a given orientation, the device orientation detector 105 may send control signal 108 to the microphone configuration logic 120 to turn off or turn on certain microphones of the group of microphones 110. In other words, various mobile device use cases or mobile device orientations may be associated with certain microphone configurations and such microphone configurations may be triggered by actions taken on the device in conjunction with device orientations. This may be based on pre-determined configuration settings for given orientations in some embodiments, or may be based on other or additional criteria in other embodiments. For example, placing a device in a docking station may trigger engaging a pre-determined microphone configuration. In another example, placing the device in a speakerphone mode and placing the device on a tabletop or desktop may trigger another pre-determined microphone configuration. Thus in some embodiments, the device orientation detector 105, when present, may send orientation information 102 to the beamformer controller 190 such that the beamformer controller 190 may control or override such use case or orientation related settings of the microphone configuration logic 120.
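A device-orientation-to-microphone-configuration mapping like the one described above might look as follows; the orientation names, microphone count, and gain values are purely illustrative assumptions, not settings from this disclosure.

```python
# Hypothetical pre-determined microphone configurations keyed by use case /
# detected orientation; a gain of 0.0 turns the corresponding microphone off.
MIC_CONFIGS = {
    "handheld_portrait":     [1.0, 1.0, 0.0, 0.0],
    "tabletop_speakerphone": [1.0, 1.0, 1.0, 1.0],
    "docked":                [0.0, 0.0, 1.0, 1.0],
}

def select_mic_config(orientation: str):
    """Return the pre-determined gain set for a detected orientation,
    falling back to all microphones on when no setting exists."""
    return MIC_CONFIGS.get(orientation, [1.0, 1.0, 1.0, 1.0])
```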
In one example embodiment, the beamformer 130 is implemented as two beamformers: a beamformer 131 that creates the voice beamform and a beamformer 132 that creates the noise beamform.
Two symmetrical paths exist between the respective beamformers 131 and 132 and the noise suppressor 170; one for virtual microphone voice signal 135 and one for virtual microphone noise signal 136. The two paths are symmetrical in that they each employ a respective energy estimator 141 and 142 operatively coupled to the beamformers 131 and 132, a respective voice activity detector (VAD) 151 and 152 operatively coupled to the energy estimators 141 and 142, and a noise estimator 161 and 162 operatively coupled to the VAD 151 and 152, respectively. The two noise estimators 161 and 162 are operatively coupled to the noise suppressor 170 to provide respective control signals 149 and 153. The noise estimator 162 receives control signal 143 from VAD 152. The two pathways, including all the components described above, may be considered as a “voice channel” and a “noise channel.” That is, a voice signal and a noise signal are sent along the respective pathways through the various components, along with control signals between components when appropriate. The voice signal or noise signal may be passed along the pathways and through some of the components without any processing or other action being taken by that component in some embodiments. The voice channel and noise channel are virtual channels that are related to a corresponding voice beamform and noise beamform. The voice beamform may be created by beamformer 131 and the noise beamform may be created by beamformer 132. The voice signal 135 may be considered a voice channel, which may also be considered to be one of the virtual microphone outputs. The noise signal 136 may be considered a noise channel, which may also be considered to be another one of the virtual microphone outputs. The “virtual microphones” correspond to beamforms that may incorporate audio from one or more physical microphones of the group of microphones 110.
Each virtual microphone output is operatively coupled to a respective buffer 133 and 134 which may be a circular buffer to store voice data or noise data while signal examination on the pathways is taking place. That is, signal data may be stored while the signals are being examined to determine if voice is actually present or not in the signals. Thus the signal is buffered as a signal of interest so that if voice or noise is determined to be present the signal can be processed or used accordingly. For example, in some embodiments, voice and noise signals from the beamformers 130 may be buffered and sent to the voice recognition engine 180 while the beamformers 130 continue to adjust beamform patterns to improve the voice and noise signals.
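The buffering described above amounts to a bounded circular buffer of recent audio frames held while detection runs. A minimal sketch follows; the class name and frame-based interface are assumptions.

```python
from collections import deque

class CircularAudioBuffer:
    """Hold the most recent audio frames while a VAD, voice recognition
    engine, or energy estimator examines the signal of interest."""

    def __init__(self, max_frames: int):
        self._frames = deque(maxlen=max_frames)  # oldest frames drop off

    def push(self, frame):
        self._frames.append(frame)

    def snapshot(self):
        # Copy of the buffered signal for a downstream consumer such as
        # a voice recognizer or noise suppressor.
        return list(self._frames)
```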
For purposes of explanation, the voice signal 135 pathway will be described first in detail. The symmetrical pathway for the noise signal 136 operates in a similar manner, and any differences will be addressed below. Therefore, beginning with voice signal 135, the energy estimator 141 is operatively coupled to the buffer 133 and to VAD 151. The energy estimator 141 provides a control signal 109 to the buffer 133, a voice and control signal 119 to the VAD 151 and a control signal 111 to the beamformer controller 190. The noise signal 136 energy estimator 142 provides a control signal 113 to buffer 134. In some embodiments, the buffer 133 and buffer 134 may each be controlled by VAD 151 and VAD 152, respectively, and energy estimator 141 and energy estimator 142 may not be present. That is, in some embodiments, VAD 151 and VAD 152 are used to detect voice energy in respective beamform patterns generated by beamformers 130 rather than initially looking for unspecific audio energy as when using the energy estimators. In other embodiments, the VAD may be omitted and, instead, the voice recognition engine 180 and voice confidence metrics alone (without the VAD) may be used as an indicator of the presence of voice in the signal. These operations are discussed further herein below with respect to various embodiments and various related methods of operation.
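An energy estimator of this kind can be as simple as a frame-energy measurement compared against a baseline, gating the more costly VAD. The sketch below illustrates that gating; the 6 dB margin and the function names are assumed values for illustration.

```python
import numpy as np

def frame_energy_db(frame):
    """Mean-square frame energy in dB (small epsilon avoids log of zero)."""
    return 10 * np.log10(np.mean(np.square(frame)) + 1e-12)

def should_run_vad(frame, baseline_db, margin_db=6.0):
    """Invoke the VAD only when frame energy deviates sufficiently from
    the baseline, analogous to energy estimator 141 gating VAD 151."""
    return frame_energy_db(frame) > baseline_db + margin_db
```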
The VAD 151 is further operatively coupled to a noise estimator 161 and provides a voice and control signal 127. The VAD 151 is operatively coupled to the beamformer controller 190 and provides control signal 123 which informs the beamformer controller 190 when the VAD 151 has detected voice. The noise estimator 161 may be a signal-to-noise ratio (SNR) estimator in some embodiments, or may be some other type of noise estimator. The noise estimator 161 is operatively coupled to the beamformer controller 190 and provides control signal 145 which informs the beamformer controller 190 when noise suppression is required for the voice signal 135. In other words, control signal 145 provides information to the beamformer controller 190 which in turn controls the beamformer 131 so that the beamformer 131 may continue to scan or may adjust the beamform pattern in order to reduce some of the noise contained in the voice signal.
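For instance, a signal-to-noise-ratio style noise estimator can compare power on the voice and noise channels and flag when suppression is warranted. The following is a hedged sketch; the 15 dB target is an assumed tuning value, not a threshold from this disclosure.

```python
import numpy as np

def snr_db(voice_frame, noise_frame):
    """Rough SNR estimate from the voice and noise virtual channels."""
    power = lambda x: np.mean(np.square(x)) + 1e-12
    return 10 * np.log10(power(voice_frame) / power(noise_frame))

def noise_suppression_required(voice_frame, noise_frame, target_snr_db=15.0):
    """True when the estimated SNR falls below target, analogous to the
    indication carried by control signal 145."""
    return snr_db(voice_frame, noise_frame) < target_snr_db
```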
Each of the VADs 151 and 152 and the noise estimators 161 and 162 may be operatively coupled to the respective buffer 133 or buffer 134, to receive buffered voice signal 118 or buffered noise signal 117, respectively. Noise suppressor 170 may be operatively coupled to both buffer 133 and buffer 134 to receive both the buffered voice signal 118 and the buffered noise signal 117. These connections are not shown in the drawings.
Therefore, noise estimator 161 may receive the buffered voice signal 118 from the buffer 133 and provide control signal 145 to the beamformer controller 190, and voice and control signal 149 to noise suppressor 170. Noise estimator 161 is also operatively coupled to noise estimator 162 by control and data connection 160 such that the two noise estimators can obtain and use information from the other channel to perform various noise estimation operations in some embodiments. The noise suppressor 170 is operatively coupled to the voice recognition engine 180 to provide a noise suppressed voice signal 157, to the beamformer controller 190 to receive control signal 155, and to system memory 103 by read-write connection 173. The noise suppressor 170 may access system memory 103 to read and retrieve noise suppression algorithms, stored in noise suppression algorithms database 171, for execution by the noise suppressor 170. The beamformer controller 190 is operatively coupled to system memory 103 by a read-write connection 193 to access pre-determined beamform patterns stored in a beamform patterns database 191. The system memory 103 is a non-volatile, non-transitory memory.
The noise suppressor 170 may receive the buffered voice signal 118 from the buffer 133 and provide a noise suppressed voice signal 157 to the voice recognition engine 180 and/or to one or more voice transceivers 104 in some embodiments. In some embodiments, the voice recognition engine 180 may not be used and may not be present. That is, in some embodiments, the noise suppressed voice signal 157 may only be provided to one or more voice transceivers 104 for transmission, either by a wired or wireless telecommunications channel or over a wired or wireless network connection if a voice over Internet protocol (VoIP) system is employed by the device into which the apparatus 100 is incorporated. In embodiments having the voice recognition engine 180 present, the voice recognition engine 180 may be operatively coupled to the system control 101, which may be any type of voice controllable system control depending on the device in which the apparatus 100 is incorporated such as, but not limited to, a voice controlled dialer of a mobile telephone, a video recorder system control, an application control of a mobile telephone, smartphone, tablet, laptop, in-vehicle control system, etc., or any other type of voice controllable system control. However, the system control 101 may not be present in all embodiments. The voice recognition engine 180 includes basic voice recognition (VR) logic 181 that recognizes human speech. In some embodiments, the voice recognition engine 180 may additionally, or alternatively, include speaker identification voice recognition logic (SI-VR) 182 which is trained to recognize specific human speech, such as the speech of a specific user.
A control signal 163, sent by the beamformer controller 190, may invoke either the VR logic 181 or the SI-VR logic 182. In response to the control signal 163 instructions, either the VR logic 181 or the SI-VR logic 182 will read either, or both of, the buffered noise signal 117 or buffered voice signal 118. The voice recognition engine 180 will provide a voice-to-text stream with corresponding voice confidence metrics on each phrase or group of words as an indication (i.e., a confidence score) to the beamformer controller 190 of the likelihood of recognized human speech, or the likelihood of a specific user's speech if the SI-VR logic 182 has been invoked. This indication is provided as the voice confidence metrics 159.
In the various embodiments, the beamformer controller 190 is operative to monitor various control signals which provide various indications of conditions on the voice signal 135 and noise signal 136. In response to the conditions, the beamformer controller 190 is operative to make adjustments to the beamformers 130 to change the beamform directivity. For example, the beamformer controller 190 attempts to adjust the beamformer 131 until the voice signal 135 is substantially the user's voice. Additionally, the beamformer controller 190 attempts to adjust the beamformer 132 until the noise signal 136 substantially captures the noises and sounds in the acoustic environment of the user other than the user's voice, such as a jammer voice or voices or other environmental background noise.
In some situations, the formation of a single beamform may be sufficient. For example, using a VAD, the VR logic 181 or the SI-VR logic 182 (i.e., trained VR) to form a voice beamform channel, along with a noise suppressor, may provide sufficient fidelity and de-noising for a given application or for a given acoustic environment. Likewise, a noise beamform channel that uses trained VR to substantially eliminate the user's voice, together with a noise suppressor, may also provide sufficient fidelity and de-noising for a given application or for a given acoustic environment.
The beamformer controller 190 is operative to configure the group of microphones 110, which may be accomplished in some embodiments by controlling the microphone configuration logic 120 to turn microphones on or off according to device orientation detected by device orientation detector 105, or other conditions. In some embodiments, the beamformer controller 190 may generate random beamforms for the voice or noise signal paths, where the appropriate signal path components check the results of each. In other embodiments, the beamformer controller 190 may cause the virtual microphone beamforms to change such that the beamforms pan or scan an audio environment until desired conditions are obtained. In yet other embodiments, the beamformer controller 190 may configure the beamformers 130 using pre-determined beamform patterns stored in a beamform patterns database 191 in system memory 103. In yet other embodiments, beamformer 131 and beamformer 132 may be adaptive beamformers that are operative to determine the magnitude and phase coefficients needed to combine microphone outputs of the group of microphones 110 in order to steer a beam or a null in a desired direction. In the various embodiments, the beamformer controller 190 is operative to monitor control signals from any of the following components, in any combination: control signal 111 received from energy estimator 141, control signal 115 from energy estimator 142, control signal 123 from VAD 151, control signal 125 from VAD 152, control signal 145 from noise estimator 161 and/or control signal 147 from noise estimator 162. The beamformer controller 190 may also receive voice confidence metrics 159 from the voice recognition engine 180. The beamformer controller 190 is operative to send a control signal 155 to noise suppressor 170 to invoke noise suppression under certain conditions that are described herein. In some embodiments, the beamformer controller 190 may be integrated into beamformers 130 such that beamformers 130 include all the features of the beamformer controller.
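One of the strategies above, scanning a database of pre-determined patterns, reduces to scoring each pattern and keeping the extremes: the most voice-like pattern for the voice beam and the least voice-like for the noise beam. The sketch below assumes hypothetical `beamformer.apply` and `voice_score` interfaces; the latter could wrap a VAD decision or a VR confidence metric.

```python
def choose_beamforms(beamformer, patterns, mic_frame, voice_score):
    """Scan pre-determined beamform patterns and select one pattern for the
    voice virtual microphone and one for the noise virtual microphone."""
    ranked = sorted(
        patterns,
        key=lambda p: voice_score(beamformer.apply(p, mic_frame)),
    )
    noise_pattern = ranked[0]    # least voice -> noise beamform
    voice_pattern = ranked[-1]   # most voice  -> voice beamform
    return voice_pattern, noise_pattern
```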
The disclosed embodiments employ VAD 151 and VAD 152 to distinguish voice activity from noise (and vice versa) and accordingly send respective control signals 123 and 125 to the beamformer controller 190. The embodiments also utilize noise estimator 161 and noise estimator 162 to determine when to enable or disable noise reduction if voice cannot be properly distinguished from the signal.
The beamformer controller 190 accordingly adjusts the beamform directivity of beamformer 131 and beamformer 132 based on energy levels detected by energy estimator 141 and energy estimator 142, voice activity as determined by VAD 151 or VAD 152, and the noise estimators 161 and 162. That is, if the energy level detected exceeds a threshold, the VAD looks for voice. If voice is not detected, the beamformer controller 190 may adjust the respective beamform pattern. If voice is detected, the noise estimator looks to determine if noise suppression is required or if the signal is sufficient as is. If noise suppression is needed, the beamformer controller 190 may send control signal 155 to activate the noise suppressor 170 and to perform a voice confidence metric test on the voice signal 157 by the voice recognition engine 180.
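Expressed as a hedged sketch, one iteration of that decision flow might look like the following; every interface here (`energy_gate`, `vad`, `snr_ok`, `controller`) is a hypothetical stand-in for the corresponding component, not an API from this disclosure.

```python
def control_step(voice_frame, noise_frame, energy_gate, vad, snr_ok, controller):
    """One control iteration: energy gate, then VAD, then the noise
    estimate decides whether to invoke the noise suppressor."""
    if not energy_gate(voice_frame):
        return                                   # energy below threshold
    if not vad(voice_frame):
        controller.adjust_voice_beamform()       # keep searching for voice
    elif not snr_ok(voice_frame, noise_frame):
        controller.enable_noise_suppressor()     # cf. control signal 155
```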
Thus, the energy estimators 141 and 142 are operative to detect deviations from a baseline that may be an indicator of voice being present in a received audio signal, or to identify if the beamformers 131 and 132 have a high sensitivity portion of their respective beamforms in a direction of a dominant energy source, which may be the primary background noise. If such deviations are detected, the energy estimator 141 may send control signal 119 to activate VAD 151 to determine if voice is actually present in the received audio signal. Short-term deviations exceeding a threshold may also cause control signal 109 to be sent to buffer 133 to invoke buffering of the signal.
An example method of operation of the apparatus 100 may be understood in view of the flowcharts described below.
Acoustic textbook beam patterns for differential dual-microphone arrays include bidirectional, hyper-cardioid, and cardioid shapes, whose polar patterns have infinite-depth nulls. In typical physical systems, the phase and magnitude mismatches between microphone signals are influenced by various factors such as hardware, A/D converter precision, clocking limitations, etc. The physical separation distance between microphones and their surrounding structure further reduces the depth of these nulls. In typically realized broadband signal systems, the null depth of a cardioid pattern may be as little as 10 dB, or as high as 36 dB. Therefore, if a null is directed toward the only jammer talker or noise source present, the expected attenuation of that noise source or jammer could be at least 10 to 12 dB. Note that with perfectly matched microphones and signal processing channels, the attenuation can be much higher. If there are multiple jammer talkers or noise sources oriented in multiple directions, the maximum attenuation realizable with only one steerable null will be less than this 10 to 12 dB value. In one embodiment, in order to form a noise beam, the beamformer controller 190 can steer a null at a desired voice. The desired voice will be attenuated by the aforementioned amounts, and the noise beam will thus be substantially noise. In another embodiment, in order to form a voice beam, the beamformer controller 190 can steer a null at a jammer talker source. The resulting signal will then be substantially voice, having only a small component of jammer signal, as it was attenuated by the aforementioned amount. In yet another embodiment, in the case of a diffused sound field, the beamformer controller 190 can orient a hypercardioid beamform in the direction of a desired talker, thereby forming a signal that is substantially voice due to the −6 dB random energy efficiency of the beam pattern relative to that of an omnidirectional microphone.
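The dependence of null depth on channel mismatch can be checked numerically. The sketch below models a delay-and-subtract cardioid (two microphones, rear end-fire source) with an assumed 2 cm spacing and a single test tone; the gain-mismatch values are illustrative, and the resulting depths are consistent with the 10 to 36 dB range discussed above.

```python
import numpy as np

fs, f, d, c = 16_000, 1_000.0, 0.02, 343.0  # assumed sample rate, tone, spacing
tau = d / c                                 # inter-microphone travel time (s)
t = np.arange(0, 0.1, 1 / fs)

def rear_null_depth_db(gain_mismatch):
    """Attenuation of a rear end-fire tone for y(t) = m1(t) - g*m2(t - tau).

    For a rear source, m1(t) = s(t - tau) and the delayed m2 also equals
    s(t - tau), so a perfectly matched channel (g = 1) cancels it exactly.
    """
    m1 = np.sin(2 * np.pi * f * (t - tau))
    m2_delayed = np.sin(2 * np.pi * f * (t - tau))
    y = m1 - gain_mismatch * m2_delayed
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(m1) / (rms(y) + 1e-12))

for g in (1.0, 0.999, 0.97, 0.7):
    print(f"gain mismatch {g:5.3f}: null depth ~ {rear_null_depth_db(g):6.1f} dB")
# Matched channels give a very deep null; ~3% mismatch gives ~30 dB and
# ~30% mismatch only ~10 dB, mirroring the 10 to 36 dB range above.
```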
In operation block 205, the beamformer controller 190 adjusts at least one beamform until voice is identified on at least one voice virtual microphone signal based on verification by voice activity detection and/or voice recognition confidence metrics. In one example, VAD 151 or VAD 152 will be invoked to determine whether voice is present in the signal or not. For example, if VAD 151 does not detect voice in the signal, then VAD 151 may send control signal 123 to the beamformer controller 190 to indicate that the beamformer controller 190 should re-adapt, or in some other way continue to search for voice by changing the beamform accordingly.
In operation block 207, the beamformer controller 190 adjusts at least a second beamform until either a jammer voice or background noise is identified in at least one noise virtual microphone signal. For example, in one embodiment, VAD 152 may be used to determine whether voice is present in the noise signal 136 or not. In some embodiments, for situations where the VAD 152 detects that voice is present, the VAD 152 may send control signal 125 to beamformer controller 190 to invoke usage of the voice recognition engine 180 to further refine the voice detection. For example, the beamformer controller 190 may send control signal 163 to the voice recognition engine 180 to command the SI-VR logic 182 to analyze the buffered noise signal 117 and determine if any voice detected is that of the user. If the user's voice is detected, based on the voice confidence metrics 159 returned to the beamformer controller 190, the beamformer controller 190 may change the beamform to look for another dominant energy source (i.e., continue to search for noise). If the user's voice is not detected by the SI-VR logic 182, then in some embodiments the voice activity detected by VAD 152 may be assumed to be jammer voices (i.e., a noise source). Alternatively, if the VAD 152 does not detect voice at all, then the control signal 125 may indicate to the beamformer controller 190 that only background noise has been detected in the noise signal 136. In either of these scenarios, the search for a noise source (whether ambient noise, jammer voices, or both) was successful.
In operation block 209, the first and second virtual microphone signals are sent to a dual input noise suppressor. Under certain conditions, the virtual microphone outputs will be sent to the noise suppressor 170. However, in some instances, the beamforming of the voice signal 135 may produce an adequately de-noised voice signal such that further noise suppression is not required. The noise estimators 161 and 162 make the determination of whether noise suppression is required or not. That is, the noise estimators 161 and 162 determine whether noise suppression is required for the voice recognition engine 180 to function properly, or if the user's voice will be sufficiently understood by far end listeners (because it has sufficiently little background noise). For example, if voice confidence metrics are too low for the voice signal, then the noise suppressor 170 may need to be applied. In accordance with the embodiments, the beamformed virtual microphone voice signal and the beamformed virtual microphone noise signal are therefore used as inputs to a noise suppressor. That is, once the noise signal 136 is determined to contain only background noise as was described above, or is found to contain a jammer's voice, the noise signal 136 may be considered adequate for use as an input to the noise suppressor, and the beamformer controller 190 will send control signal 155 to noise suppressor 170 to proceed with the dual input noise suppression procedures. The method of operation then ends as shown.
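Dual-input noise suppression can take many forms; one simple, well-known form is spectral subtraction that uses the noise virtual microphone as the noise reference for the voice virtual microphone. The sketch below illustrates that idea only and is not necessarily an algorithm from the noise suppression algorithms database 171; `alpha` and `floor` are assumed tuning values.

```python
import numpy as np

def dual_input_suppress(voice_frame, noise_frame, alpha=1.0, floor=0.05):
    """Spectral-subtraction sketch: subtract the noise channel's magnitude
    spectrum from the voice channel's, keeping the voice channel's phase."""
    V = np.fft.rfft(voice_frame)
    N = np.fft.rfft(noise_frame)
    # Over-subtract by alpha, but never drop below a spectral floor.
    mag = np.maximum(np.abs(V) - alpha * np.abs(N), floor * np.abs(V))
    return np.fft.irfft(mag * np.exp(1j * np.angle(V)), len(voice_frame))
```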
In operation block 309, at least one beamform is adjusted until voice is identified in at least one voice virtual microphone signal based on the voice recognition confidence metrics. In operation block 311, at least a second beamform is adjusted until a jammer voice or background noise is identified in at least one noise virtual microphone signal. In operation block 313, the first and second virtual microphone signals are sent to a dual input noise suppressor, and the method of operation ends as shown.
Further details of operation for obtaining the voice and noise virtual microphone signals and related beamforms are described below.
If orientation information is not available, or is not relevant for the particular device in which the apparatus 100 is incorporated, the method of operation proceeds to operation block 405. In operation block 405, some or all of the microphones of the group of microphones 110 are combined through the beamformer 130. After the microphone configuration has been selected in either operation block 403 or operation block 405, the method of operation proceeds to decision block 407. The decision of whether noise suppression is required, in decision block 407, is based on the results of the evaluation of noise estimator 161, which evaluates the noise level on the voice signal 135, the noise level in the user's environment, or the signal-to-noise ratio of the user's speech in the user's acoustic environment. If the noise estimator 161 determines that noise suppression is not required in decision block 407, then the control signal 145 will be sent to the beamformer controller 190 to indicate that the current beamform is adequate. In some embodiments, the voice signal may therefore be used for various applications as-is without further noise suppression and the method of operation ends. However, if noise suppression is required in decision block 407, then the resulting noise and voice virtual microphone signals are sent to the noise suppressor 170 in operation block 409.
More particularly, noise estimator 161 sends voice and control signal 149 to the noise suppressor 170. The noise suppressor 170 may obtain the buffered voice signal 118 from buffer 133 and may obtain the buffered noise signal 117 from buffer 134. The noise suppressor 170 may access the system memory 103 over read-write connection 173, and obtain a pertinent noise suppression algorithm from the noise suppression algorithms database 171. In some embodiments, the beamformer controller 190 may send the control signal 155 to noise suppressor 170 to indicate a noise suppression algorithm from the noise suppression algorithms database 171 that the noise suppressor 170 should execute.
The noise estimator 161 may check the noise suppressed voice signal 157 output by the noise suppressor 170 to determine if the applied noise suppression algorithm was adequate. If the noise suppression was adequate, and noise suppression is therefore no longer required in decision block 411, the method of operation ends. However, if noise suppression is still required in decision block 411, then the voice signal 157 may be sent to the voice recognition engine 180. In response, the voice recognition engine will send voice confidence metrics 159 to the beamformer controller 190. If the confidence scores are too low, then the beamformer controller 190 may determine that noise suppression is still required in decision block 415. If the confidence scores are sufficiently high in decision block 415, noise suppression is no longer required and the method of operation ends. If noise suppression is still required in decision block 415, then the control signal 163 may invoke SI-VR 182 to determine if the user's voice is present in the signal. The method of operation then ends.
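The confidence-driven loop just described can be summarized in a few lines. In the sketch below, `suppress`, `recognize`, and `controller` are hypothetical callables standing in for the noise suppressor 170, the voice recognition engine 180, and the beamformer controller 190, and the confidence threshold and iteration cap are assumptions.

```python
def refine_until_recognized(suppress, recognize, controller,
                            min_confidence=0.6, max_iters=5):
    """Re-run suppression and beam adjustment while VR confidence stays low,
    loosely mirroring decision blocks 411 and 415."""
    for _ in range(max_iters):
        text, confidence = recognize(suppress())   # cf. voice signal 157 -> 159
        if confidence >= min_confidence:
            return text                            # suppression now adequate
        controller.adjust_voice_beamform()         # keep refining the beam
    return None
```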
Additionally, operations involving the system memory 103 may be implemented using pointers, where components such as, but not limited to, the beamformer controller 190 or the noise suppressor 170 access the system memory 103 as directed by control signals, which may include pointers to memory locations or database access commands that access the pre-determined beamform patterns database 191 or the noise suppression algorithms database 171, respectively.
It is to be understood that various applications can benefit from the disclosed embodiments, in addition to devices and systems using voice recognition control. For example, the beamforming methods of operation disclosed herein may be used to determine a voice and noise signal for the purpose of identifying a user for a voice uplink channel of a mobile telephone and/or for applying dual or multi-input noise suppression for a voice uplink channel of a mobile telephone. In another example application, a stationary conference call system may incorporate the apparatuses and methods herein described. Other applications of the various disclosed embodiments will be apparent to those of ordinary skill in light of the description and various example embodiments herein described.
While various embodiments have been illustrated and described, it is to be understood that the invention is not so limited. Numerous modifications, changes, variations, substitutions and equivalents will occur to those skilled in the art without departing from the scope of the present invention as defined by the appended claims.
The present application claims priority to U.S. Provisional Patent Application No. 61/827,799, filed May 28, 2013, entitled “APPARATUS AND METHOD FOR BEAMFORMING TO OBTAIN VOICE AND NOISE SIGNALS IN A VOICE RECOGNITION SYSTEM,” and further claims priority to U.S. Provisional Patent Application No. 61/798,097, filed Mar. 15, 2013, entitled “VOICE RECOGNITION FOR A MOBILE DEVICE,” and further claims priority to U.S. Provisional Pat. App. No. 61/776,793, filed Mar. 12, 2013, entitled “VOICE RECOGNITION FOR A MOBILE DEVICE,” all of which are assigned to the same assignee as the present application, and all of which are hereby incorporated by reference herein in their entirety.
20140024321 | Zhu et al. | Jan 2014 | A1 |
20140044126 | Sabhanatarajan et al. | Feb 2014 | A1 |
20140045422 | Qi et al. | Feb 2014 | A1 |
20140068288 | Robinson et al. | Mar 2014 | A1 |
20140092830 | Chen et al. | Apr 2014 | A1 |
20140093091 | Dusan | Apr 2014 | A1 |
20140177686 | Greene et al. | Jun 2014 | A1 |
20140185498 | Schwent et al. | Jul 2014 | A1 |
20140207983 | Jones | Jul 2014 | A1 |
20140227981 | Pecen et al. | Aug 2014 | A1 |
20140273882 | Asrani et al. | Sep 2014 | A1 |
20140273886 | Black et al. | Sep 2014 | A1 |
20140313088 | Rozenblit et al. | Oct 2014 | A1 |
20140349593 | Danak et al. | Nov 2014 | A1 |
20140376652 | Sayana et al. | Dec 2014 | A1 |
20140379332 | Rodriguez | Dec 2014 | A1 |
20150017978 | Hong et al. | Jan 2015 | A1 |
20150024786 | Asrani et al. | Jan 2015 | A1 |
20150031420 | Higaki et al. | Jan 2015 | A1 |
20150072632 | Pourkhaatoun et al. | Mar 2015 | A1 |
20150080047 | Russell et al. | Mar 2015 | A1 |
20150092954 | Coker et al. | Apr 2015 | A1 |
20150171919 | Ballantyne et al. | Jun 2015 | A1 |
20150181388 | Smith | Jun 2015 | A1 |
20150236828 | Park et al. | Aug 2015 | A1 |
20150245323 | You et al. | Aug 2015 | A1 |
20150280674 | Langer et al. | Oct 2015 | A1 |
20150280675 | Langer et al. | Oct 2015 | A1 |
20150280876 | You et al. | Oct 2015 | A1 |
20150312058 | Black et al. | Oct 2015 | A1 |
20150349410 | Russell et al. | Dec 2015 | A1 |
20150365065 | Higaki et al. | Dec 2015 | A1 |
20160014727 | Nimbalker | Jan 2016 | A1 |
20160036482 | Black et al. | Feb 2016 | A1 |
20160080053 | Sayana et al. | Mar 2016 | A1 |
20180062882 | Klomsdorf et al. | Mar 2018 | A1 |
Number | Date | Country |
---|---|---|
1762137 | Apr 2006 | CN |
1859656 | Nov 2006 | CN |
1984476 | Jun 2007 | CN |
101035379 | Sep 2007 | CN |
10053205 | May 2002 | DE |
10118189 | Nov 2002 | DE |
0695059 | Jan 1996 | EP |
1158686 | Nov 2001 | EP |
1298809 | Apr 2003 | EP |
1357543 | Oct 2003 | EP |
1511010 | Mar 2005 | EP |
1753152 | Feb 2007 | EP |
1443791 | Feb 2009 | EP |
2487967 | Aug 2012 | EP |
2255443 | Nov 2012 | EP |
2557433 | Feb 2013 | EP |
2568531 | Mar 2013 | EP |
2590258 | May 2013 | EP |
H09247852 | Sep 1997 | JP |
2000286924 | Oct 2000 | JP |
20050058333 | Jun 2005 | KR |
2005113251 | Jan 2006 | RU |
WO-9306682 | Apr 1993 | WO |
WO-9416517 | Jul 1994 | WO |
WO-9600401 | Jan 1996 | WO |
WO-1999021389 | Apr 1999 | WO |
WO-1999050968 | Oct 1999 | WO |
WO-0111721 | Feb 2001 | WO |
WO-2003007508 | Jan 2003 | WO |
WO-03107327 | Dec 2003 | WO |
WO-2004021634 | Mar 2004 | WO |
WO-2004040800 | May 2004 | WO |
WO-2004084427 | Sep 2004 | WO |
WO-2004084447 | Sep 2004 | WO |
WO-2006039434 | Apr 2006 | WO |
WO-2006046192 | May 2006 | WO |
WO-2006130278 | Dec 2006 | WO |
WO-2007052115 | May 2007 | WO |
WO-2007080727 | Jul 2007 | WO |
WO-2008027705 | Mar 2008 | WO |
WO-2008033117 | Mar 2008 | WO |
WO-2008085107 | Jul 2008 | WO |
WO-2008085416 | Jul 2008 | WO |
WO-2008085720 | Jul 2008 | WO |
WO-2008112849 | Sep 2008 | WO |
WO-2008113210 | Sep 2008 | WO |
WO-2008137354 | Nov 2008 | WO |
WO-2008137607 | Nov 2008 | WO |
WO-2008156081 | Dec 2008 | WO |
WO-2009107090 | Sep 2009 | WO |
WO-2010080845 | Jul 2010 | WO |
WO-2010124244 | Oct 2010 | WO |
WO-2010138039 | Dec 2010 | WO |
WO-2012115649 | Aug 2012 | WO |
WO-2012149968 | Nov 2012 | WO |
WO-2012177939 | Dec 2012 | WO |
WO-2013131268 | Sep 2013 | WO |
Entry |
---|
US 8,224,317, 08/2012, Knoppert et al. (withdrawn) |
Li, et al. “A Subband Feedback Controlled Generalized Sidelobe Canceller in Frequency Domain with Multi-Channel Postfilter,” 2nd International Workshop on Intelligent Systems and Applications (ISA), IEEE, pp. 1-4 (May 22, 2010). |
WIPO, International Search Report, PCT Application No. PCT/US2014/014375, dated Apr. 17, 2014. (4 pages). |
Tesoriero, R. et al., “Improving location awareness in indoor spaces using RFID technology”, ScienceDirect, Expert Systems with Applications, 37 (2010) pp. 894-898. |
“Coverage enhancement for RACH messages”, 3GPP TSG-RAN WG1 Meeting #76, R1-140153, Alcatel-Lucent, Alcatel-Lucent Shanghai Bell, Feb. 2014, 5 pages. |
“Coverage Improvement for PRACH”, 3GPP TSG RAN WG1 Meeting #76—R1-140115, Intel Corporation, Feb. 2014, 9 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2015/033570, dated Oct. 19, 2015, 18 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/280,775, dated Mar. 23, 2016, 11 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/873,557, dated Apr. 11, 2016, 5 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/952,738, dated Mar. 28, 2016, 7 pages. |
“On the need of PDCCH for SIB, RAR and Paging”, 3GPP TSG-RAN WG1 #76-R1-140239, Feb. 2014, 4 pages. |
“Specification Impact of Enhanced Filtering for Scalable UMTS”, 3GPP TSG RAN WG1 Meeting #76, R1-140726, QUALCOMM Incorporated, Feb. 2014, 2 pages. |
“Supplemental Notice of Allowance”, U.S. Appl. No. 14/031,739, dated Apr. 21, 2016, 2 pages. |
“Written Opinion”, Application No. PCT/US2013/071616, dated Jun. 3, 2015, 9 pages. |
Yu-chun,“A New Downlink Control Channel Scheme for LTE”, Vehicular Technology Conference (VTC Spring), 2013 IEEE 77th, Jun. 2, 2013, 6 pages. |
“3rd Generation Partnership Project; Technical Specification Group Radio Access Network”, 3GPP TR 36.814 V9.0.0 (Mar. 2010), Further Advancements for E-UTRA Physical Layer Aspects (Release 9), Mar. 2010, 104 pages. |
"A feedback framework based on W2W1 for Rel. 10", 3GPP TSG RAN WG1 #61bis, R1-103664, Jun. 2010, 19 pages. |
“Addition of PRS Muting Configuration Information to LPPa”, 3GPP TSG RAN3 #68, Montreal, Canada; Ericsson, R3-101526, May 2010, 7 pages. |
“Advisory Action”, U.S. Appl. No. 12/650,699, dated Jan. 30, 2013, 3 pages. |
“Advisory Action”, U.S. Appl. No. 12/650,699, dated Sep. 25, 2014, 3 pages. |
"AN-1432 The LM4935 Headset and Push-Button Detection Guide", Texas Instruments Incorporated, http://www.ti.com/lit/an/snaa024a.snaa024a.pdf, May 2013, 8 pages. |
"'Best Companion' reporting for improved single-cell MU-MIMO pairing", 3GPP TSG RAN WG1 #56; Athens, Greece; Alcatel-Lucent, R1-090926, Feb. 2009, 5 pages. |
"Change Request—Clarification of the CP length of empty OFDM symbols in PRS subframes", 3GPP TSG RAN WG1 #59bis, Valencia, Spain, ST-Ericsson, Motorola, Qualcomm Inc, R1-100311, Jan. 2010, 2 pages. |
"Change Request 36.211—Introduction of LTE Positioning", 3GPP TSG RAN WG1 #59, Jeju, South Korea; Ericsson, R1-095027, Nov. 2009, 6 pages. |
"Change Request 36.213 Clarification of PDSCH and PRS in combination for LTE positioning", 3GPP TSG RAN WG1 #58bis, Miyazaki, Japan; Ericsson, et al., R1-094262, Oct. 2009, 4 pages. |
“Change Request 36.214—Introduction of LTE Positioning”, 3GPP TSG RAN WG1 #59, Jeju, South Korea, Ericsson, et al., R1-094430, Nov. 2009, 4 pages. |
“Companion Subset Based PMI/CQI Feedback for LTE-A MU-MIMO”, 3GPP TSG RAN WG1 #60; San Francisco, USA, RIM; R1-101104, Feb. 2010, 8 pages. |
"Comparison of PMI-based and SCF-based MU-MIMO", 3GPP TSG RAN1 #58; Shenzhen, China; R1-093421, Aug. 2009, 5 pages. |
“Development of two-stage feedback framework for Rel-10”, 3GPP TSG RAN WG1 #60bis Meeting, R1-101859, Alcatel-Lucent Shanghai Bell, Alcatel-Lucent, Apr. 2010, 5 pages. |
“Digital cellular telecommunications system (Phase 2+)”, Location Services (LCS); Broadcast Network Assistance for Enhanced Observed Time Difference (E-OTD) and Global Positioning System (GPS) Positioning Methods (3GPP TS 04.35 version 8.3.0 Release 1999), 2001, 37 pages. |
"Discussions on UE positioning issues", 3GPP TSG-RAN WG1 #57 R1-091911, San Francisco, USA, May 2009, 12 pages. |
"DL Codebook design for 8Tx precoding", 3GPP TSG RAN WG1 #60bis, R1-102380, LG Electronics, Beijing, China, Apr. 2010, 4 pages. |
“Double codebook design principles”, 3GPP TSG RAN WG1 #61bis, R1-103804, Nokia, Nokia Siemens Networks, Dresden, Germany, Jun. 2010, 9 pages. |
“Earbud with Push-to-Talk Microphone”, Motorola, Inc., model 53727, iDEN 2.5 mm 4-pole mono PTT headset NNTNN5006BP, 2013, 10 pages. |
“Evaluation of protocol architecture alternatives for positioning”, 3GPP TSG-RAN WG2 #66bis R2-093855, Los Angeles, CA, USA, Jun. 2009, 4 pages. |
“Ex Parte Quayle Action”, U.S. Appl. No. 13/088,237, Dec. 19, 2012, 5 pages. |
“Extended European Search Report”, EP Application No. 12196319.3, dated Feb. 27, 2014, 7 pages. |
“Extended European Search Report”, EP Application No. 12196328.4, dated Feb. 26, 2014, 7 pages. |
"Extensions to Rel-8 type CQI/PMI/RI feedback using double codebook structure", 3GPP TSG RAN WG1#59bis, R1-100251, Valencia, Spain, Jan. 2010, 4 pages. |
“Feedback Codebook Design and Performance Evaluation”, 3GPP TSG RAN WG1 #61bis, R1-103970, LG Electronics, Jun. 2010, 6 pages. |
“Feedback considerations for DL MIMO and CoMP”, 3GPP TSG RAN WG1 #57bis; Los Angeles, USA; Qualcomm Europe; R1-092695, Jun. 2009, 6 pages. |
"Final Improvement Proposal for PTT Support in HFP", Bluetooth SIG, Inc., revision V10r00 (PTTinHFP_FIPD), Jul. 20, 2010, 50 pages. |
“Final Office Action”, U.S. Appl. No. 12/407,783, dated Feb. 15, 2012, 18 pages. |
“Final Office Action”, U.S. Appl. No. 12/573,456, dated Mar. 21, 2012, 12 pages. |
“Final Office Action”, U.S. Appl. No. 12/650,699, dated Jul. 16, 2014, 20 pages. |
“Final Office Action”, U.S. Appl. No. 12/650,699, dated Jul. 29, 2015, 26 pages. |
“Final Office Action”, U.S. Appl. No. 12/650,699, dated Nov. 13, 2012, 17 pages. |
“Final Office Action”, U.S. Appl. No. 12/756,777, dated Nov. 1, 2013, 12 pages. |
“Final Office Action”, U.S. Appl. No. 12/899,211, dated Oct. 24, 2013, 17 pages. |
“Final Office Action”, U.S. Appl. No. 13/477,609, dated Jul. 31, 2015, 11 pages. |
“Final Office Action”, U.S. Appl. No. 13/721,771, dated Oct. 29, 2015, 8 pages. |
“Final Office Action”, U.S. Appl. No. 13/733,297, dated Jul. 22, 2015, 20 pages. |
“Final Office Action”, U.S. Appl. No. 13/873,557, dated Jul. 17, 2015, 13 pages. |
“Final Office Action”, U.S. Appl. No. 14/012,050, dated Jul. 6, 2015, 23 pages. |
“Final Office Action”, U.S. Appl. No. 14/052,903, dated Oct. 1, 2015, 10 pages. |
“Final Office Action”, U.S. Appl. No. 14/150,047, dated Mar. 4, 2016, 14 pages. |
“Final Office Action”, U.S. Appl. No. 14/280,775, dated Dec. 9, 2015, 13 pages. |
“Foreign Office Action”, CN Application No. 201080025882.7, dated Feb. 8, 2014, 19 pages. |
"Further details on DL OTDOA", 3GPP TSG RAN WG1 #56bis, Seoul, South Korea—Ericsson, R1-091312, Mar. 2009, 6 pages. |
“Further Refinements of Feedback Framework”, 3GPP TSG-RAN WG1 #60bis R1-101742; Ericsson, ST-Ericsson, Apr. 2010, 8 pages. |
“IEEE 802.16m System Description Document [Draft]”, IEEE 802.16 Broadband Wireless Access Working Group, Nokia, Feb. 7, 2009, 171 pages. |
"Implicit feedback in support of downlink MU-MIMO", Texas Instruments, 3GPP TSG RAN WG1 #58; Shenzhen, China, R1-093176, Aug. 2009, 4 pages. |
"Improving the hearability of LTE Positioning Service", 3GPP TSG RAN WG1 #55bis; Alcatel-Lucent, R1-090053, Jan. 2009, 5 pages. |
“Innovator in Electronics, Technical Update, Filters & Modules PRM Alignment”, Module Business Unit, Apr. 2011, 95 pages. |
“International Preliminary Report on Patentability”, Application No. PCT/US2013/042042, dated Mar. 10, 2015, 8 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/060440, dated Feb. 5, 2015, 11 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2015/031328, dated Aug. 12, 2015, 11 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/045755, dated Oct. 23, 2014, 11 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/045956, dated Oct. 31, 2014, 11 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/056642, dated Dec. 9, 2014, 11 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2013/071615, dated Mar. 5, 2014, 13 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2013/040242, dated Oct. 4, 2013, 14 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/047233, dated Jan. 22, 2015, 8 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2013/077919, dated Apr. 24, 2014, 8 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/070925, dated May 11, 2015, 9 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2014/018564, dated Jun. 18, 2014, 11 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2013/072718, dated Jun. 18, 2014, 12 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2015/027872, dated Jul. 15, 2015, 12 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2010/026579, dated Feb. 4, 2011, 13 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2011/034959, dated Aug. 16, 2011, 13 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2011/045209, dated Oct. 28, 2011, 14 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2011/039214, dated Sep. 14, 2011, 9 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2010/038257, dated Oct. 1, 2010, 9 pages. |
“International Search Report and Written Opinion”, Application No. PCT/US2010/034023, dated Dec. 1, 2010, 9 pages. |
"International Search Report", Application No. PCT/US2013/071616, dated Mar. 5, 2014, 2 pages. |
“International Search Report”, Application No. PCT/US2010/030516, dated Oct. 8, 2010, 5 pages. |
“International Search Report”, Application No. PCT/US2010/036982, dated Nov. 22, 2010, 4 pages. |
“International Search Report”, Application No. PCT/US2010/041451, dated Oct. 25, 2010, 3 pages. |
“International Search Report”, Application No. PCT/US2011/044103, dated Oct. 24, 2011, 3 pages. |
"Introduction of LTE Positioning", 3GPP TSG RAN WG1 #58, Shenzhen, China, R1-093604; Draft CR 36.213, Aug. 2009, 3 pages. |
"Introduction of LTE Positioning", 3GPP TSG RAN WG1 #59, Jeju, South Korea, Ericsson et al.; R1-094429, Nov. 2009, 5 pages. |
"Introduction of LTE Positioning", 3GPP TSG RAN WG1 #58, Shenzhen, China; Draft CR 36.214; R1-093605, Aug. 2009, 6 pages. |
"Introduction of LTE Positioning", 3GPP TSG-RAN WG1 Meeting #58, R1-093603, Shenzhen, China, Aug. 2009, 5 pages. |
"LS on Assistance Information for OTDOA Positioning Support for LTE Rel-9", 3GPP TSG RAN WG1 Meeting #58; Shenzhen, China; R1-093729, Aug. 2009, 3 pages. |
"LS on LTE measurement supporting Mobility", 3GPP TSG WG1 #48, Tdoc R1-071250; St. Louis, USA, Feb. 2007, 2 pages. |
“LTE Positioning Protocol (LPP)”, 3GPP TS 36.355 V9.0.0 (Dec. 2009); 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Release 9, Dec. 2009, 102 pages. |
“Market & Motivation (MRD Section3) for Interoperability Testing of Neighbor Awareness Networking”, WiFi Alliance Neighbor Awareness Networking Marketing Task Group, Version 0.14, 2011, 18 pages. |
“Marketing Statement of Work Neighbor Awareness Networking”, Version 1.17, Neighbor Awareness Networking Task Group, May 2012, 18 pages. |
“Method for Channel Quality Feedback in Wireless Communication Systems”, U.S. Appl. No. 12/823,178, filed Jun. 25, 2010, 34 pages. |
“Motorola SJYN0505A Stereo Push to Talk Headset for Nextel”, Motorola Inc., iDEN 5-pole 2.5 mm Stereo Headset SJYN05058A, 2010, 2 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/407,783, dated Sep. 9, 2013, 16 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/407,783, dated Oct. 5, 2011, 14 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/480,289, dated Jun. 9, 2011, 20 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/492,339, dated Aug. 19, 2011, 13 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/542,374, dated Feb. 24, 2014, 25 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/542,374, dated Aug. 7, 2013, 22 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/542,374, dated Aug. 31, 2012, 27 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/542,374, dated Dec. 23, 2011, 22 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/573,456, dated Nov. 18, 2011, 9 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/577,553, dated Feb. 4, 2014, 10 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/577,553, dated Aug. 12, 2013, 11 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/577,553, dated Dec. 28, 2011, 7 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/650,699, dated Mar. 30, 2015, 28 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/650,699, dated Apr. 23, 2013, 19 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/650,699, dated Jul. 19, 2012, 12 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/650,699, dated Dec. 16, 2013, 26 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/756,777, dated Apr. 19, 2013, 17 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/813,221, dated Oct. 8, 2013, 10 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/823,178, dated Aug. 23, 2012, 15 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/899,211, dated Apr. 10, 2014, 12 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/899,211, dated May 22, 2013, 17 pages. |
“Non-Final Office Action”, U.S. Appl. No. 12/973,467, dated Mar. 28, 2013, 9 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/477,609, dated Dec. 3, 2014, 7 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/477,609, dated Dec. 14, 2015, 9 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/692,520, dated Sep. 5, 2014, 15 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/692,520, dated Oct. 5, 2015, 17 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/721,771, dated May 20, 2015, 6 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/733,297, dated Feb. 2, 2016, 17 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/733,297, dated Mar. 13, 2015, 23 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/759,089, dated Apr. 18, 2013, 16 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/873,557, dated Mar. 11, 2015, 19 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/924,838, dated Nov. 28, 2014, 6 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/945,968, dated Apr. 28, 2015, 16 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/012,050, dated Feb. 10, 2015, 18 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/031,739, dated Aug. 18, 2015, 16 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/052,903, dated Mar. 11, 2015, 7 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/068,309, dated Oct. 2, 2015, 14 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/150,047, dated Jun. 29, 2015, 11 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/226,041, dated Jun. 5, 2015, 8 pages. |
"Non-Final Office Action", U.S. Appl. No. 14/280,775, dated Jul. 16, 2015, 9 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/330,317, dated Feb. 25, 2016, 14 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/339,476, dated Jan. 20, 2016, 9 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/445,715, dated Jan. 15, 2016, 26 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/952,738, dated Jan. 11, 2016, 7 pages. |
“Notice of Allowance”, U.S. Appl. No. 12/365,166, dated Apr. 16, 2010, 7 pages. |
"Notice of Allowance", U.S. Appl. No. 12/365,166, dated Aug. 25, 2010, 4 pages. |
“Notice of Allowance”, U.S. Appl. No. 12/650,699, dated Jan. 14, 2016, 8 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/040,090, dated Mar. 8, 2012, 6 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/088,237, dated Jun. 17, 2013, 8 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/088,237, dated Jul. 11, 2013, 8 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/188,419, dated May 22, 2013, 8 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/873,557, dated Dec. 23, 2015, 10 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/924,838, dated Mar. 12, 2015, 7 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/924,838, dated Jul. 8, 2015, 7 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/945,968, dated Sep. 16, 2015, 6 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/012,050, dated Dec. 14, 2015, 12 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/031,739, dated Mar. 1, 2016, 7 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/052,903, dated Feb. 1, 2016, 8 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/226,041, dated Dec. 31, 2015, 5 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/488,709, dated Sep. 23, 2015, 10 pages. |
"On Extensions to Rel-8 PMI Feedback", 3GPP TSG RAN WG1 #60, R1-101129, Motorola, San Francisco, USA, Feb. 2010, 4 pages. |
“On OTDOA in LTE”, 3GPP TSG RAN WG1 #55bis, Ljubljana, Slovenia; R1-090353, Jan. 2009, 8 pages. |
"On OTDOA method for LTE Positioning", 3GPP TSG RAN WG1 #56, Ericsson, R1-090918, Athens, Greece, Feb. 2009, 6 pages. |
“On Serving Cell Muting for OTDOA Measurements”, 3GPP TSG RAN1 #57, R1-092628—Los Angeles, CA, USA, Jun. 2009, 7 pages. |
“Performance evaluation of adaptive codebook as enhancement of 4 Tx feedback”, 3GPP TSG RAN WG1#61bis, R1-103447, Jul. 2010, 6 pages. |
"PHY Layer Specification Impact of Positioning Improvements", 3GPP TSG RAN WG1 #56bis, Athens, Greece; Qualcomm Europe, R1-090852, Feb. 2009, 3 pages. |
"Physical Channels and Modulation (Release 8)", 3GPP TS 36.211 V8.6.0 (Mar. 2009) 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA), Mar. 2009, 83 pages. |
“Physical Channels and Modulation (Release 9)”, 3GPP TS 36.211 V9.0.0 (Dec. 2009); 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Release 9, Dec. 2009, 85 pages. |
“Physical layer procedures”, 3GPP TS 36.213 V9.0.1 (Dec. 2009); 3rd Generation Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA); Release 9, Dec. 2009, 79 pages. |
“Positioning Subframe Muting for OTDOA Measurements”, 3GPP TSG RAN1 #58 R1-093406, Shenzhen, P. R. China, Aug. 2009, 9 pages. |
"Positioning Support for LTE", 3GPP TSG RAN WG1 #42, Athens, Greece, RP-080995, Dec. 2008, 5 pages. |
“Pre-Brief Appeal Conference Decision”, U.S. Appl. No. 12/650,699, Apr. 9, 2013, 2 pages. |
“Rationale for mandating simulation of 4Tx Widely-Spaced Cross-Polarized Antenna Configuration for LTE-Advanced MU-MIMO”, 3GPP TSG-RAN WG1 Meeting #61bis, R1-104184, Dresden, Germany, Jun. 2010, 5 pages. |
"Reference Signals for Low Interference Subframes in Downlink", 3GPP TSG RAN WG1 Meeting #56bis; Seoul, South Korea; Ericsson; R1-091314, Mar. 2009, 8 pages. |
“Restriction Requirement”, U.S. Appl. No. 13/721,771, dated Mar. 16, 2015, 5 pages. |
“Restriction Requirement”, U.S. Appl. No. 14/031,739, dated Apr. 28, 2015, 7 pages. |
“Signaling Support for PRS Muting in”, 3GPP TSG RAN2 #70, Montreal, Canada; Ericsson, ST-Ericsson; R2-103102, May 2010, 2 pages. |
"Some Results on DL-MIMO Enhancements for LTE-A", 3GPP TSG WG1 #55bis, R1-090328, Motorola; Ljubljana, Slovenia, Jan. 2009, 5 pages. |
“Sounding RS Control Signaling for Closed Loop Antenna Selection”, 3GPP TSG RAN #51, R1-080017—Mitsubishi Electric, Jan. 2008, 8 pages. |
"Study on hearability of reference signals in LTE positioning support", 3GPP TSG RAN1 #56bis, R1-091336, Seoul, South Korea, Mar. 2009, 8 pages. |
“Supplemental Notice of Allowance”, U.S. Appl. No. 14/488,709, dated Oct. 7, 2015, 8 pages. |
"System Simulation Results for OTDOA", 3GPP TSG RAN WG4 #53, Jeju, South Korea, Ericsson, R4-094532, Nov. 2009, 3 pages. |
"Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA)", 3GPP TS 36.211 v8.4.0 (Sep. 2008); 3rd Generation Partnership Project; Physical Channels and Modulation (Release 8), 2008, 78 pages. |
“Technical Specification Group Radio Access Network”, 3GPP TS 25.305 V8.1.0 (Dec. 2008) 3rd Generation Partnership Project; Stage 2 functional specification of User Equipment (UE) positioning in UTRAN (Release 8), 2008, 79 pages. |
"Technical Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-UTRA)", 3GPP TS 36.305 V0.2.0 (May 2009) 3rd Generation Partnership Project; Stage 2 functional specification of User Equipment (UE) positioning in E-UTRAN (Release 9), 2010, 52 pages. |
"Text proposal on Orthogonal PRS transmissions in mixed CP deployments using MBSFN subframes", 3GPP TSG RAN WG1 #59, Jeju, South Korea, Motorola, R1-095003, Nov. 2009, 4 pages. |
“Text proposal on measurements”, 3GPP TSG RAN2 #60bis, Tdoc R2-080420; Motorola, Sevilla, Spain, Jan. 2008, 9 pages. |
“Two Component Feedback Design and Codebooks”, 3GPP TSG RAN1 #61, R1-103328, Motorola, Montreal, Canada, May 2010, 7 pages. |
“Two-Level Codebook design for MU MIMO enhancement”, 3GPP TSG RAN WG1 #60, R1-102904, Montreal, Canada, May 2010, 8 pages. |
"UTRAN SFN-SFN observed time difference measurement & 3GPP TS 25.311 IE 10.3.7.106 'UE positioning OTDOA neighbour cell info' assistance data fields", 3GPP TSG RAN WG4 (Radio) #20, New Jersey, USA; Tdoc R4-011408, Nov. 2001, 4 pages. |
"View on the feedback framework for Rel. 10", 3GPP TSG RAN WG1 #61, R1-103026, Samsung, Montreal, Canada, May 2010, 15 pages. |
"Views on Codebook Design for Downlink 8Tx MIMO", 3GPP TSG RAN WG1 #60, R1-101219, San Francisco, USA, Feb. 2010, 9 pages. |
Colin,"Restrictions on Autonomous Muting to Enable Time Difference of Arrival Measurements", U.S. Appl. No. 61/295,678, filed Jan. 15, 2010, 26 pages. |
Costas,“A Study of a Class of Detection Waveforms Having Nearly Ideal Range-Doppler Ambiguity Properties”, Fellow, IEEE; Proceedings of the IEEE, vol. 72, No. 8, Aug. 1984, 14 pages. |
Guo,"A Series-Shunt Symmetric Switch Makes Transmit-Receive Antennas Reconfigurable in Multipath Channels", IEEE 3rd Int'l Conf. on Digital Object Identifier, May 29, 2011, pp. 468-471. |
Jafar,“On Optimality of Beamforming for Multiple Antenna Systems with Imperfect Feedback”, Department of Electrical Engineering, Stanford University, CA, USA, 2004, 7 pages. |
Knoppert,“Communication Device”, U.S. Appl. No. 29/329,028, filed Dec. 8, 2008, 10 pages. |
Knoppert,“Indicator Shelf for Portable Electronic Device”, U.S. Appl. No. 12/480,289, filed Jun. 8, 2009, 15 pages. |
Krishnamurthy,“Interference Control, SINR Optimization and Signaling Enhancements to Improve the Performance of OTDOA Measurements”, U.S. Appl. No. 12/813,221, filed Jun. 10, 2010, 20 pages. |
Krishnamurthy,“Threshold Determination in TDOA-Based Positioning System”, U.S. Appl. No. 12/712,191, filed Feb. 24, 2010, 19 pages. |
M/A-COM,"GaAs SP6T 2.5V High Power Switch Dual-/Tri-/Quad-Band GSM Applications", Rev. V1 data sheet, www.macomtech.com, Mar. 22, 2003, 5 pages. |
Renesas,“uPG2417T6M GaAs Integrated Circuit SP6T Switch for NFC Application (R09DS0010EJ0100)”, Rev. 1.00 data sheet, Dec. 24, 2010, 12 pages. |
Sayana,“Method of Codebook Design and Precoder Feedback in Wireless Communication Systems”, U.S. Appl. No. 61/374,241, filed Aug. 16, 2010, 40 pages. |
Sayana,“Method of Precoder Information Feedback in Multi-Antenna Wireless Communication Systems”, U.S. Appl. No. 61/331,818, filed May 5, 2010, 43 pages. |
Valkonen,“Impedance Matching and Tuning of Non-Resonant Mobile Terminal Antennas”, Aalto University Doctoral Dissertations, Mar. 15, 2013, 94 pages. |
Visotsky,"Space-Time Transmit Precoding With Imperfect Feedback", IEEE Transactions on Information Theory, vol. 47, No. 6, Sep. 2001, pp. 2632-2639. |
Vodafone,"PDCCH Structure for MTC Enhanced Coverage", 3GPP TSG RAN WG1 #76, R1-141030, Prague, Czech Republic, Feb. 2014, 2 pages. |
Yun,"Distributed Self-Pruning (DSP) Algorithm for Bridges in Clustered Ad Hoc Networks", Embedded Software and Systems; Lecture Notes in Computer Science, Springer, May 14, 2007, pp. 699-707. |
Zhuang,“Method for Precoding Based on Antenna Grouping”, U.S. Appl. No. 12/899,211, filed Oct. 6, 2010, 26 pages. |
“Advisory Action”, U.S. Appl. No. 13/692,520, dated Sep. 6, 2016, 3 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 14/339,476, dated Sep. 13, 2016, 2 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 14/339,476, dated Sep. 30, 2016, 2 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/692,520, dated Nov. 17, 2016, 7 pages. |
“Non-Final Office Action”, U.S. Appl. No. 14/445,715, dated Oct. 20, 2016, 43 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/721,771, dated Oct. 26, 2016, 5 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/150,047, dated Oct. 28, 2016, 8 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 14/031,739, dated Jun. 8, 2016, 2 pages. |
“Final Office Action”, U.S. Appl. No. 13/692,520, dated May 26, 2016, 25 pages. |
“Final Office Action”, U.S. Appl. No. 13/733,297, dated Jul. 18, 2016, 17 pages. |
“Final Office Action”, U.S. Appl. No. 14/330,317, dated Jun. 16, 2016, 15 pages. |
“Final Office Action”, U.S. Appl. No. 14/445,715, dated Jul. 8, 2016, 31 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/721,771, dated May 31, 2016, 9 pages. |
"Notice of Allowance", U.S. Appl. No. 14/280,775, dated Jul. 15, 2016, 5 pages. |
“Notice of Allowance”, U.S. Appl. No. 14/339,476, dated Jul. 18, 2016, 11 pages. |
“Supplemental Notice of Allowance”, U.S. Appl. No. 14/952,738, dated Jun. 9, 2016, 4 pages. |
“International Preliminary Report on Patentability”, Application No. PCT/US2015/033570, dated Jan. 26, 2017, 7 pages. |
“Foreign Office Action”, EP Application No. 14705002.5, dated Feb. 16, 2017, 7 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 13/721,771, dated Feb. 10, 2017, 2 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 13/721,771, dated Dec. 16, 2016, 2 pages. |
“Corrected Notice of Allowance”, U.S. Appl. No. 14/150,047, dated Dec. 16, 2016, 2 pages. |
“Non-Final Office Action”, U.S. Appl. No. 13/733,297, dated Jun. 22, 2017, 19 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/692,520, dated Jun. 28, 2017, 22 pages. |
“Final Office Action”, U.S. Appl. No. 13/733,297, dated Dec. 28, 2017, 21 pages. |
“Foreign Office Action”, EP Application No. 14705002.5, dated Oct. 26, 2017, 6 pages. |
“Foreign Office Action”, Chinese Application No. 201480019733.8, dated Mar. 27, 2018, 16 pages. |
“Notice of Allowance”, U.S. Appl. No. 13/733,297, dated Feb. 8, 2018, 11 pages. |
“Notice of Allowance”, U.S. Appl. No. 15/787,312, dated Mar. 28, 2018, 17 pages. |
Number | Date | Country | |
---|---|---|---|
20140278394 A1 | Sep 2014 | US |
Number | Date | Country | |
---|---|---|---|
61827799 | May 2013 | US | |
61776793 | Mar 2013 | US | |
61798097 | Mar 2013 | US |