Automatic keyword pass-through system

Information

  • Patent Grant
  • Patent Number
    11,432,065
  • Date Filed
    Tuesday, February 9, 2021
  • Date Issued
    Tuesday, August 30, 2022
Abstract
At least one embodiment is directed to a method for automatically activating ambient sound pass-through in an earphone in response to a detected keyword in the ambient sound field of the earphone user, the steps of the method comprising at least: receiving at least one ambient sound microphone (ASM) signal; receiving at least one audio content (AC) signal; and comparing the ASM signal to a keyword and, if the ASM signal matches the keyword, generating an AC gain.
Description
FIELD OF THE INVENTION

The present invention relates to acoustic keyword detection and pass-through and, though not exclusively, to devices that can be acoustically controlled or interacted with.


BACKGROUND OF THE INVENTION

Sound isolating (SI) earphones and headsets are becoming increasingly popular for music listening and voice communication. SI earphones enable the user to hear and experience an incoming audio content signal (be it speech from a phone call or music audio from a music player) clearly in loud ambient noise environments, by attenuating the level of ambient sound in the user's ear canal.


The disadvantage of such SI earphones/headsets is that the user is acoustically detached from their local sound environment, and communication with people in their immediate environment is therefore impaired. If a second individual in the SI earphone user's ambient environment wishes to talk with the SI earphone wearer, the second individual must often shout loudly in close proximity to the SI earphone wearer, or otherwise attract the attention of said SI earphone wearer, e.g. by being in visual range. Such a process can be time-consuming, dangerous or difficult in critical situations. A need therefore exists for a “hands-free” mode of operation to enable an SI earphone wearer to detect when a second individual in their environment wishes to communicate with them.


WO2007085307 describes a system for directing ambient sound through an earphone by non-electronic means via a channel, using a switch to select whether the channel is open or closed.


Application US 2011/0206217 A1 describes a system to electronically direct ambient sound to a loudspeaker in an earphone, and to disable this ambient sound pass-through during a phone call.


US 2008/0260180 A1 describes an earphone with an ear-canal microphone and ambient sound microphone to detect user voice activity.


U.S. Pat. No. 7,672,845 B2 describes a method and system to monitor speech and detect keywords or phrases in the speech, such as for example, monitored calls in a call center or speakers/presenters using teleprompters.


However, the above art does not describe a method to automatically pass through ambient sound to an SI earphone wearer when a keyword is spoken to the SI earphone wearer.





BRIEF DESCRIPTION OF THE DRAWINGS

Exemplary embodiments of the present invention will become more fully understood from the detailed description and the accompanying drawings, wherein:



FIG. 1 illustrates an audio hardware system;



FIG. 2 illustrates a method for mixing an ambient sound microphone signal with audio content;



FIG. 3 illustrates a method for keyword detection to adjust audio gain; and



FIG. 4 illustrates a method for keyword detection to make a phone call.





DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

The following description of exemplary embodiment(s) is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.


At least one embodiment is directed to a system for detecting a keyword spoken in a sound environment and alerting a user to the spoken keyword. In one embodiment as an earphone system: an earphone typically occludes the earphone user's ear, reducing the ambient sound level in the user's ear canal. An audio content signal reproduced in the earphone by a loudspeaker, e.g. incoming speech audio or music, further reduces the earphone user's ability to understand, detect or otherwise experience keywords in their environment, e.g. the earphone user's name as vocalized by someone in the user's close proximity. At least one ambient sound microphone, e.g. located on the earphone or a mobile computing device, directs ambient sound to a keyword analysis system, e.g. an automatic speech recognition system. When the keyword analysis system detects a keyword, sound from an ambient sound microphone is directed to the earphone loudspeaker and (optionally) the level of audio content reproduced on the earphone loudspeaker is reduced, thereby allowing the earphone wearer to hear the spoken keyword in their ambient environment; the system may also perform an action upon detection of a keyword, such as placing an emergency call, or attenuating the level of or pausing reproduced music.
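By way of illustration only, the following Python sketch shows the control flow just described, with stubbed-out actions; the class and method names, gain behavior, and emergency keyword set are illustrative assumptions, not the patent's implementation.

```python
# Illustrative sketch of the keyword-triggered pass-through flow.
# The detector result is assumed to be a keyword string or None.

EMERGENCY_KEYWORDS = {"help", "assist", "emergency"}  # example set

class PassThroughController:
    def __init__(self) -> None:
        self.ambient_pass_through = False

    def on_keyword(self, keyword: str | None) -> None:
        """React to one detector result for a block of ambient audio."""
        if keyword is None:
            return
        self.ambient_pass_through = True   # route ASM signal to the loudspeaker
        self.duck_audio_content()          # attenuate or pause reproduced music
        if keyword in EMERGENCY_KEYWORDS:
            self.place_emergency_call()    # optional action on emergency words

    def duck_audio_content(self) -> None:
        print("audio content ducked")      # stand-in for a real media player

    def place_emergency_call(self) -> None:
        print("dialing emergency contact") # stand-in for a telephony API

# Usage: PassThroughController().on_keyword("help")
```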


In another embodiment, keyword detection for mobile cell phones is enabled using the microphone resident on the phone, configured to detect sound and direct the sound to a keyword detection system. Phones are often carried in pockets and other sound-attenuating locations, reducing the effectiveness of a keyword detection system when the built-in phone microphone is used. A benefit of using the ambient microphones on a pair of earphones is an increased signal-to-noise ratio (SNR). Using a pair of microphones can enhance the SNR via directional enhancement algorithms, e.g. beam-forming algorithms, improving the keyword detection rate (e.g. decreasing false positives and false negatives). Other locations for microphones to feed a keyword detection system are on other body-worn media devices such as glasses, heads-up displays or smart-watches.
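As a rough sketch of the directional-enhancement idea, the following assumes two earphone ASMs and a simple delay-and-sum beamformer; the microphone spacing, sample rate, and integer-sample delay are simplifying assumptions, not values from the patent.

```python
import numpy as np

FS = 16_000          # sample rate in Hz (assumed)
MIC_SPACING = 0.15   # metres between the two earphone ASMs (assumed)
SPEED_OF_SOUND = 343.0

def delay_and_sum(left: np.ndarray, right: np.ndarray,
                  look_angle_deg: float) -> np.ndarray:
    """Steer a two-mic array toward look_angle_deg (0 = broadside)."""
    tau = MIC_SPACING * np.sin(np.radians(look_angle_deg)) / SPEED_OF_SOUND
    shift = int(round(tau * FS))      # integer-sample delay approximation
    delayed = np.roll(right, shift)   # crude stand-in for a fractional delay
    # Averaging adds the steered speech coherently while diffuse noise
    # partially cancels, raising SNR before keyword detection.
    return 0.5 * (left + delayed)
```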



FIG. 1 illustrates one exemplary embodiment of the present invention 100, in which a communication earphone/headset system (140-150 and 110-120) is connected to a voice communication device (e.g. mobile telephone, radio, computer device) and/or audio content delivery device 190 (e.g. portable media player, computer device). Said communication earphone/headset system comprises a sound isolating component 145 for blocking the user's ear meatus (e.g. using foam or an expandable balloon); an Ear Canal Receiver 149 (ECR, i.e. loudspeaker) for receiving an audio signal and generating a sound field in a user ear-canal; at least one ambient sound microphone (ASM) 147 for receiving an ambient sound signal and generating at least one ASM signal; and an optional Ear Canal Microphone (ECM) 143 for receiving an ear-canal signal measured in the user's occluded ear-canal and generating an ECM signal. The earphone can be connected wirelessly 151 (e.g., via RF or Bluetooth) or via cable 152.


At least one exemplary embodiment is directed to a signal processing system that receives an Audio Content (AC) signal (e.g. music or speech audio signal) from the said communication device 190 (e.g. mobile phone etc.) or said audio content delivery device 160 (e.g. music player), and further receives the at least one ASM signal and the optional ECM signal. Said signal processing system mixes the at least one ASM and AC signal and transmits the resulting mixed signal to the ECR loudspeaker. The mixing of the at least one ASM and AC signal is controlled by voice activity of the earphone wearer. FIG. 2 illustrates a method 200 for mixing an ambient sound microphone signal with audio content. First, ambient sound is measured by the ambient sound microphone 147 and converted into an ambient sound microphone signal 220. The ambient sound signal is sent to a voice pass-through method 210 and to a signal gain amplifier 230, which applies an ambient sound gain 205. Audio content 250 can be sent to a signal gain amplifier 260. The gained ambient sound signal and the gained audio content, i.e. the modified ambient sound microphone signal and the modified audio content signal 225, can be mixed 240 to form a combined signal 270.
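A minimal numeric sketch of the FIG. 2 signal flow, assuming block-based floating-point samples; the gain values in the usage example are illustrative assumptions:

```python
import numpy as np

def mix_asm_and_ac(asm_signal: np.ndarray, ac_signal: np.ndarray,
                   asm_gain: float, ac_gain: float) -> np.ndarray:
    """Apply the ASM and AC gains, then sum the two gained signals."""
    modified_asm = asm_gain * asm_signal    # gain stage 230
    modified_ac = ac_gain * ac_signal       # gain stage 260
    return modified_asm + modified_ac       # mixer 240 -> combined signal 270

# Example: full ambient pass-through with audio content ducked by 12 dB.
asm = np.random.randn(512).astype(np.float32)
ac = np.random.randn(512).astype(np.float32)
out = mix_asm_and_ac(asm, ac, asm_gain=1.0, ac_gain=10 ** (-12 / 20))
```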


According to a preferred embodiment, the ASM signal of the earphone is directed to a Keyword Detection System (KDS). Keyword detection is a process known to those skilled in the art and can be accomplished by various means, for example the system described by U.S. Pat. No. 7,672,845 B2. A KDS typically detects a limited number of spoken keywords (e.g. fewer than 20 keywords); however, the number of keywords is not intended to be limiting in the present invention. In the preferred embodiment, examples of such keywords are at least one of the following keywords:


1. A first name (i.e. a “given name” or “Christian name”, e.g. “John”, “Steve”, “Yadira”), where this is the first name of the earphone wearer.


2. A surname (i.e. a second name or “family name”, e.g. “Usher”, “Goldstein”), where this is the surname of the earphone wearer.


3. A familiar or truncated form of the first name or surname (e.g. “Johnny”, “Jay”, “Stevie-poos”).


4. A nickname for the earphone wearer.


5. An emergency keyword not associated with the earphone wearer, such as “help”, “assist”, “emergency”.


In another embodiment, the ambient sound microphone is located on a mobile computing device 190, e.g. a smart phone.


In yet another embodiment, the ambient sound microphone is located on an earphone cable.


In yet another embodiment, the ambient sound microphone is located on a control box.


In yet another embodiment, the ambient sound microphone is located on a wrist-mounted computing device.


In yet another embodiment, the ambient sound microphone is located on an eye-wear system, e.g. electronic glasses used for augmented reality.


In the present invention, when at least one keyword is detected the level of the ASM signal fed to the ECR is increased. In a preferred embodiment, when voice activity is detected, the level of the AC signal fed to the ECR is also decreased.


In a preferred embodiment, following cessation of detected user voice activity, and following a “pre-fade delay”, the level of the ASM signal fed to the ECR is decreased and the level of the AC signal fed to the ECR is increased. In a preferred embodiment, the time period of the “pre-fade delay” is proportional to the time period of continuous user voice activity before cessation of the user voice activity, and the “pre-fade delay” time period is bounded by an upper limit, which in a preferred embodiment is 10 seconds.
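A minimal sketch of the pre-fade delay rule, assuming a proportionality constant of 0.5 (the text specifies only the proportionality and the 10-second upper bound):

```python
PRE_FADE_UPPER_LIMIT_S = 10.0   # upper bound from the preferred embodiment
PROPORTIONALITY = 0.5           # assumed constant: delay = 0.5 x activity time

def pre_fade_delay(voice_activity_duration_s: float) -> float:
    """Seconds to wait after voice activity ceases before fading the
    ASM signal back down and the AC signal back up."""
    return min(PROPORTIONALITY * voice_activity_duration_s,
               PRE_FADE_UPPER_LIMIT_S)

# e.g. 4 s of speech -> 2 s delay; 60 s of speech -> capped at 10 s.
```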


In a preferred embodiment, the location of the ASM is at the entrance to the ear meatus.


The level of ASM signal fed to the ECR is determined by an ASM gain coefficient, which in one embodiment may be frequency dependent.


The level of AC signal fed to the ECR is determined by an AC gain coefficient, which in one embodiment may be frequency dependent.
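One way such a frequency-dependent gain coefficient might be applied is sketched below as a per-bin multiplication in the FFT domain; the sample rate and the speech-band boost curve are illustrative assumptions, not the patent's design.

```python
import numpy as np

FS = 16_000  # sample rate in Hz (assumed)

def apply_frequency_dependent_gain(block: np.ndarray, gain_for_hz) -> np.ndarray:
    """Multiply each FFT bin of `block` by gain_for_hz(bin_frequency)."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / FS)
    gains = np.array([gain_for_hz(f) for f in freqs])
    return np.fft.irfft(spectrum * gains, n=len(block))

# Illustrative ASM gain curve: emphasize the 1-4 kHz speech band.
speech_boost = lambda f: 1.5 if 1_000 <= f <= 4_000 else 1.0
out = apply_frequency_dependent_gain(np.random.randn(512), speech_boost)
```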


In one embodiment, the rates of gain change (slew rates) of the ASM gain and AC gain in the mixing circuit are independently controlled and are different for “gain increasing” and “gain decreasing” conditions.


In a preferred embodiment, the slew rates for increasing and decreasing the “AC gain” in the mixing circuit are approximately 5 to 30 dB per second and −5 to −30 dB per second, respectively.


In a preferred embodiment, the slew rate for increasing and decreasing “ASM gain” in the mixing circuit is inversely proportional to the AC gain (e.g., on a linear scale, the ASM gain is equal to the AC gain subtracted from unity).
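A sketch of slew-rate-limited gain updates consistent with the ranges above, with the ASM gain taken as unity minus the linear AC gain as in the preferred embodiment; the block rate and target values are assumptions:

```python
def step_ac_gain_db(current_db: float, target_db: float,
                    slew_db_per_s: float, dt_s: float) -> float:
    """Move the AC gain one time step toward target, limited to the slew rate."""
    max_step = slew_db_per_s * dt_s
    delta = target_db - current_db
    return current_db + max(-max_step, min(max_step, delta))

def asm_gain_linear(ac_gain_db: float) -> float:
    """ASM gain as unity minus the linear AC gain."""
    return 1.0 - 10 ** (ac_gain_db / 20.0)

# Example: fade content toward -30 dB at 15 dB/s, updated per 10 ms block.
g = 0.0
for _ in range(100):                       # one second of 10 ms blocks
    g = step_ac_gain_db(g, -30.0, 15.0, 0.01)
# After 1 s, g is -15 dB and asm_gain_linear(g) is about 0.82.
```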


In another embodiment, described in FIG. 4, a list of keywords is associated with a list of phone numbers. When a keyword is detected, the associated phone number is automatically called. In another configuration, a prerecorded voice message may be directed to the called phone number.


Exemplary methods for detecting keywords are familiar to those skilled in the art; for example, U.S. Pat. No. 7,672,845 B2 describes a method and system to monitor speech and detect keywords or phrases in the speech, such as, for example, monitored calls in a call center or speakers/presenters using teleprompters.



FIG. 3 illustrates at least one embodiment which is directed to a method 300 for automatically activating ambient sound pass-through in an earphone in response to a detected keyword in the ambient sound field of the earphone user, the steps of the method comprising:


Step 1 (310): Receive at least one ambient sound microphone (ASM) signal buffer and at least one audio content (AC) signal buffer.


Step 2 (320): Directing the ASM buffer to a keyword detection system (KDS).


Step 3 (330): Generating an AC gain determined by the KDS. If the KDS determines a keyword is detected, then the AC gain value is decreased, and it is optionally increased when a keyword is not detected. The step of detecting a keyword compares the ambient sound microphone signal buffer (ASMSB) to keywords stored in computer-accessible memory 335. For example, the ASMSB can be parsed into temporal sections and the spectral characteristics of the signal obtained (e.g., via FFT). A keyword's temporal and spectral characteristics can then be compared to those of the ASMSB. The spectral amplitude can be normalized so that spectral values can be compared: for example, the power density at a target frequency can be used, and the power density of the ASMSB at the target frequency modified to match the keyword's. The patterns are then compared, and if the temporal and/or spectral patterns match within a threshold average value (e.g., +/−3 dB), the keyword is identified. Note that all of the keywords can also be matched at the target frequency so that the modified spectrum of the ASMSB can be compared to all keywords. For example, suppose all keywords and the ASMSB are stored as spectrograms (amplitude or power density within a frequency bin vs. time), where the frequency bins are, for example, 100 Hz wide, and the temporal extents are matched (a short keyword and a long keyword have different durations, but each can be stretched or compressed into a common temporal extent, e.g. 1 sec with 0.01 sec bins, which can also be the same size as the ASMSB buffer length). If the target is the 1000-1100 Hz bin at 0.5-0.51 sec, then all bins can be likewise increased or decreased to bring that bin to the target amplitude or power density, for example 85 dB. The modified spectrogram of the ASMSB can then be subtracted from each keyword spectrogram and the sum of absolute differences compared against a threshold to determine whether a keyword is detected. Various methods can be used to simplify calculations: for example, ranges can be assigned integer values corresponding to the uncertainty of measurement (with an uncertainty of +/−2 dB, any value in the range 93 dB to 97 dB can be assigned the value 95 dB), and all values less than a particular level, say 5 dB above the average noise floor, can be set to 0. The spectrograms thus become matrices of integers that can be compared directly. The sum of absolute differences can also be taken over selected matrix cells identified as particularly discriminative. Note that this discussion is not intended to limit the method of KDS.
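The following Python sketch illustrates the spectrogram-comparison example just described: level normalization at a reference bin, noise-floor zeroing, quantization to the measurement uncertainty, and a sum-of-absolute-differences (SAD) test. The grid shape, reference bin (frequency bin 10 and time bin 50, roughly the 1000-1100 Hz bin at 0.5 s on a 100 Hz / 0.01 s grid), and threshold value are illustrative assumptions.

```python
import numpy as np

UNCERTAINTY_DB = 2.0         # quantization step (the +/-2 dB example)
NOISE_FLOOR_MARGIN_DB = 5.0  # zero out bins within 5 dB of the noise floor
TARGET_LEVEL_DB = 85.0       # target level at the reference bin (85 dB example)
MATCH_THRESHOLD = 300.0      # SAD threshold; an assumed value

def normalize(spec_db: np.ndarray, ref_bin: tuple[int, int]) -> np.ndarray:
    """Shift all bins so the reference bin sits at TARGET_LEVEL_DB, floor
    near-noise bins to 0, and quantize to the measurement uncertainty."""
    shifted = spec_db + (TARGET_LEVEL_DB - spec_db[ref_bin])
    floor = shifted.mean() + NOISE_FLOOR_MARGIN_DB  # crude noise-floor estimate
    shifted[shifted < floor] = 0.0
    return UNCERTAINTY_DB * np.round(shifted / UNCERTAINTY_DB)

def detect_keyword(asmsb_db: np.ndarray, keywords_db: list[np.ndarray],
                   ref_bin: tuple[int, int] = (10, 50)) -> int | None:
    """Return the index of the best-matching keyword template, or None.
    All spectrograms share one (frequency bin, time bin) grid in dB."""
    probe = normalize(asmsb_db, ref_bin)
    sads = [np.abs(probe - normalize(k, ref_bin)).sum() for k in keywords_db]
    best = int(np.argmin(sads))
    return best if sads[best] < MATCH_THRESHOLD else None
```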


Step 4 (340): Generating an ASM gain determined by the KDS. If the KDS determines a keyword is detected, then the ASM gain value 390 is increased 340; optionally, it is decreased 370 when a keyword is not detected.


Step 5 (345): Applying the AC gain (215, FIG. 2; 380, FIG. 3) to the received AC signal 250 (FIG. 2) to generate a modified AC signal 265.


Step 6 (230 and 260): Applying the ASM gain (205, FIG. 2; 390, FIG. 3) to the received ASM signal 220 (FIG. 2) to generate a modified ASM signal 221 (FIG. 2).


Step 7 (225): Mixing the modified AC signal 265 and modified ASM signals 221 to generate a mixed signal 270.


Step 8: Directing the generated mixed signal of step 7 to an Ear Canal Receiver (ECR).


At least one further embodiment is further directed to where the AC gain of step 3 and the ASM gain of step 4 are limited to an upper value and optionally a lower value.


At least one further embodiment is further directed to where the received AC signal of step 1 is received via wired or wireless means from at least one of the following non-limiting devices: smart phone, telephone, radio, portable computing device, portable media player.


At least one further embodiment is further directed to where the ambient sound microphone signal is from an ambient sound microphone located on at least one of the following:


    • an earphone;
    • a mobile computing device, e.g. a smart phone;
    • an earphone cable;
    • a control box;
    • a wrist-mounted computing device;
    • an eye-wear system, e.g. electronic glasses used for augmented reality.


At least one further embodiment is further directed to where the keyword to be detected is one of the following spoken word types:

    • 1. A first name (i.e. a “given name” or “Christian name”, e.g. “John”, “Steve”, “Yadira”), where this is the first name of the earphone wearer.
    • 2. A surname (i.e. a second name or “family name”, e.g. “Usher”, “Smith”), where this is the surname of the earphone wearer.
    • 3. A familiar or truncated form of the first name or surname (e.g. “Johnny”, “Jay”, “Stevie-poos”).
    • 4. A nickname for the earphone wearer.
    • 5. An emergency keyword not associated with the earphone wearer, such as “help”, “assist”, “emergency”.


At least one further embodiment is further directed to where the ASM signal directed to the KDS of step 2 is from a different ambient sound microphone than the one whose ASM signal is processed with the ASM gain of step 6.


At least one further embodiment is further directed to where the AC gain of step 3 is frequency dependent.


At least one further embodiment is further directed to where the ASM gain of step 4 is frequency dependent.



FIG. 4 illustrates at least one further embodiment which is directed to a method 400 for automatically initiating a phone call in response to a detected keyword in the ambient sound field of a user, the steps of the method comprising:


Step 1 (410): Receive at least one ambient sound microphone (ASM) signal buffer and at least one audio content (AC) signal buffer.


Step 2 (420): Directing the ASM buffer to a keyword detection system (KDS), where the KDS compares 430 the ASM buffer against keywords stored in processor-accessible memory 440 and determines whether a keyword is detected 460 or not 450.


Step 3 (470): Associating a list of at least one phone number with a list of at least one keyword, by comparing the detected keyword against processor-accessible memory (470, e.g., RAM, cloud data storage, CD) that stores phone numbers associated with keywords.


Step 4 (480): Calling the associated phone number when a keyword is detected.
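A minimal sketch of steps 3-4, assuming a simple in-memory keyword-to-number table and a stubbed dialer; the numbers and the `place_call` stub are illustrative, and a real system would call into a telephony API:

```python
# Keyword list associated with a phone-number list (processor-accessible
# memory 440/470 in FIG. 4), modeled here as a dictionary.
KEYWORD_TO_NUMBER = {
    "help": "+1-555-0100",   # emergency contact (example entry)
    "john": "+1-555-0101",   # wearer's name -> preset contact (example)
}

def place_call(number: str) -> None:
    print(f"calling {number}")  # stand-in for a real telephony API

def on_keyword_detected(keyword: str) -> None:
    """Steps 3-4: look up the detected keyword and call its number;
    optionally a prerecorded message could be played to the callee."""
    number = KEYWORD_TO_NUMBER.get(keyword.lower())
    if number is not None:
        place_call(number)

on_keyword_detected("Help")
```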


Note that various methods of keyword detection can be used and any description herein is not meant to limit embodiments to any particular type of KDS method.

Claims
  • 1. The method for modifying audio content in response to a keyword comprising: receiving an ear canal microphone (ECM) signal; receiving an ambient sound microphone (ASM) signal; receiving an audio content (AC) signal; comparing the ECM signal to a keyword and if the ECM signal matches the keyword then modifying an ASM gain and if the ECM signal does not match the keyword then the ASM signal is compared to the keyword and the ASM gain is modified if the ASM signal matches the keyword; applying the ASM gain to the ASM signal generating a modified ASM signal; and modifying an audio content gain if the ASM gain is modified.
  • 2. The method according to claim 1 further including: sending the modified ASM signal to a speaker.
  • 3. The method according to claim 1 further including: applying the audio content gain to the audio content signal generating a modified audio content signal.
  • 4. The method according to claim 3 further including: sending the modified audio content signal to the speaker.
  • 5. The method according to claim 3 further including: mixing the modified audio content signal with the modified ASM signal forming a mixed signal.
  • 6. The method according to claim 5 further including sending the mixed signal to the speaker.
  • 7. The method according to claim 1 wherein the keyword is a user's name, wherein the audio content gain is reduced in value and the ASM gain is increased.
  • 8. The method according to claim 1 wherein the keyword is a user's name, wherein if the ASM gain is modified then the ASM gain is increased in value.
  • 9. The method according to claim 1, wherein if the ASM gain is modified then the audio content gain is decreased in value.
  • 10. The method according to claim 1 further comprising: detecting whether the keyword was uttered by a user.
  • 11. The method according to claim 10, wherein if the keyword was uttered by the user and the ASM gain is modified then the audio content gain is decreased in value and the ASM gain is increased in value.
  • 12. The method according to claim 11 further including: applying the audio content gain to the audio content signal generating a modified audio content signal.
  • 13. The method according to claim 12 further including: mixing the modified audio content signal with the modified ASM signal forming a mixed signal.
  • 14. The method according to claim 13 further including: sending the mixed signal to the speaker.
  • 15. The method according to claim 10, wherein the keyword is a voice command.
  • 16. The method according to claim 15, wherein the voice command is to call a phone number.
  • 17. The method according to claim 16, wherein the voice command includes the phone number to call.
  • 18. The method according to claim 17, further comprising: calling the phone number.
  • 19. The method for modifying audio content in response to a user input comprising: receiving an ambient sound microphone (ASM) signal; receiving an audio content (AC) signal; receiving an ear canal microphone (ECM) signal; detecting a keyword using the ECM signal and if detected then modifying the AC signal, by reducing its volume, generating a modified AC signal; receiving a user input to adjust an ASM gain of the ASM signal; adjusting the ASM gain in response to the user input; applying the ASM gain to the ASM signal generating a modified ASM signal; mixing the modified ASM signal with the modified AC signal if the modified AC signal exists, otherwise mixing the modified ASM signal with the AC signal, to generate a modified AC signal; and sending the modified AC signal to a speaker.
  • 20. The method according to claim 19, where the user input is obtained by the user setting the amount of ambient passthrough using a GUI on a communication device.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of and claims priority benefit to U.S. patent application Ser. No. 16/555,824, filed 29 Aug. 2019, which is a continuation of and claims priority to U.S. patent application Ser. No. 16/168,752, filed 23 Oct. 2018, which is a non-provisional of and claims priority to U.S. Patent Application Ser. No. 62/575,713, filed 23 Oct. 2017, the disclosures of which are herein incorporated by reference in their entirety.

US Referenced Citations (131)
Number Name Date Kind
3876843 Moen Apr 1975 A
4054749 Suzuki et al. Oct 1977 A
4088849 Usami et al. May 1978 A
4237343 Kurtin et al. Dec 1980 A
4947440 Bateman et al. Aug 1990 A
5208867 Stites, III May 1993 A
5267321 Langberg Nov 1993 A
5524056 Killion et al. Jun 1996 A
5852804 Sako Dec 1998 A
5903868 Yuen et al. May 1999 A
6021207 Puthuff et al. Feb 2000 A
6021325 Hall Feb 2000 A
6094489 Ishige et al. Jul 2000 A
6163338 Johnson et al. Dec 2000 A
6163508 Kim et al. Dec 2000 A
6226389 Lemelson et al. May 2001 B1
6298323 Kaemmerer Oct 2001 B1
6359993 Brimhall Mar 2002 B2
6400652 Goldberg et al. Jun 2002 B1
6415034 Hietanen Jul 2002 B1
6567524 Svean et al. May 2003 B1
6618073 Lambert et al. Sep 2003 B1
RE38351 Iseberg et al. Dec 2003 E
6661901 Svean et al. Dec 2003 B1
6728385 Kvaloy et al. Apr 2004 B2
6748238 Lau Jun 2004 B1
6754359 Svean et al. Jun 2004 B1
6804638 Fiedler Oct 2004 B2
6804643 Kiss Oct 2004 B1
7072476 White et al. Jul 2006 B2
7072482 Van Doorn et al. Jul 2006 B2
7107109 Nathan et al. Sep 2006 B1
7158933 Blan Jan 2007 B2
7174022 Zhang et al. Feb 2007 B1
7209569 Boesen Apr 2007 B2
7346176 Bernardi et al. Mar 2008 B1
7430299 Armstrong et al. Sep 2008 B2
7433714 Howard et al. Oct 2008 B2
7450730 Bertg et al. Nov 2008 B2
7477756 Wickstrom et al. Jan 2009 B2
7555130 Klayman Jun 2009 B2
7562020 Le et al. Jun 2009 B2
7672845 Beranek et al. Mar 2010 B2
7756285 Sjursen et al. Jul 2010 B2
7778434 Juneau et al. Aug 2010 B2
7853031 Hamacher Dec 2010 B2
7920557 Moote Apr 2011 B2
8014553 Radivojevic et al. Sep 2011 B2
8098844 Elko Jan 2012 B2
8391501 Khawand et al. Mar 2013 B2
8401206 Seltzer et al. Mar 2013 B2
8467543 Burnett et al. Jun 2013 B2
8493204 Wong et al. Jul 2013 B2
8503704 Francart et al. Aug 2013 B2
8583428 Tashev et al. Nov 2013 B2
8600454 Nicholson Dec 2013 B2
8606571 Every et al. Dec 2013 B1
8611560 Goldstein et al. Dec 2013 B2
8625819 Goldstein et al. Jan 2014 B2
8750295 Liron Jun 2014 B2
9037458 Park et al. May 2015 B2
9123343 Kurki-Suonio Sep 2015 B2
9135797 Couper et al. Sep 2015 B2
9961435 Goyal et al. May 2018 B1
10045112 Boesen et al. Aug 2018 B2
10045117 Boesen et al. Aug 2018 B2
10063957 Boesen et al. Aug 2018 B2
10884696 Willis Jan 2021 B1
20010046304 Rast Nov 2001 A1
20020106091 Furst et al. Aug 2002 A1
20020118798 Langhart et al. Aug 2002 A1
20030035551 Light et al. Feb 2003 A1
20030161097 Le et al. Aug 2003 A1
20030165246 Kvaloy et al. Sep 2003 A1
20040042103 Mayer Mar 2004 A1
20040109668 Stuckman Jun 2004 A1
20040125965 Alberth, Jr. et al. Jul 2004 A1
20040190737 Kuhnel et al. Sep 2004 A1
20040196992 Ryan Oct 2004 A1
20040203351 Shearer et al. Oct 2004 A1
20050058313 Victorian et al. Mar 2005 A1
20050078838 Simon Apr 2005 A1
20050123146 Voix et al. Jun 2005 A1
20050175189 Lee et al. Aug 2005 A1
20050288057 Lai et al. Dec 2005 A1
20060067551 Cartwright et al. Mar 2006 A1
20060074693 Yamashita Apr 2006 A1
20060083395 Allen et al. Apr 2006 A1
20060092043 Lagassey May 2006 A1
20060133621 Chen et al. Jun 2006 A1
20060195322 Broussard et al. Aug 2006 A1
20060204014 Isenberg et al. Sep 2006 A1
20070021958 Visser et al. Jan 2007 A1
20070033029 Sakawaki Feb 2007 A1
20070043563 Comerford et al. Feb 2007 A1
20070049361 Coote et al. Mar 2007 A1
20070053522 Murray et al. Mar 2007 A1
20070076896 Hosaka et al. Apr 2007 A1
20070076898 Sarroukh et al. Apr 2007 A1
20070086600 Boesen Apr 2007 A1
20070088544 Acero et al. Apr 2007 A1
20070098192 Sipkema May 2007 A1
20070189544 Rosenberg Aug 2007 A1
20070230712 Belt et al. Oct 2007 A1
20070237341 Laroche Oct 2007 A1
20070291953 Ngia et al. Dec 2007 A1
20080037801 Alves et al. Feb 2008 A1
20080137873 Goldstein Jun 2008 A1
20080147397 Konig et al. Jun 2008 A1
20080165988 Terlizzi et al. Jul 2008 A1
20080181419 Goldstein et al. Jul 2008 A1
20080187163 Goldstein et al. Aug 2008 A1
20080260180 Goldstein et al. Oct 2008 A1
20080317259 Zhang et al. Dec 2008 A1
20090010444 Goldstein et al. Jan 2009 A1
20090010456 Goldstein et al. Jan 2009 A1
20090024234 Archibald Jan 2009 A1
20090209290 Chen et al. Aug 2009 A1
20100061564 Clemow et al. Mar 2010 A1
20100296668 Lee et al. Nov 2010 A1
20110096939 Ichimura Apr 2011 A1
20110135107 Konchitsky Jun 2011 A1
20110206217 Wels Aug 2011 A1
20110264447 Visser et al. Oct 2011 A1
20110293103 Park et al. Dec 2011 A1
20160104452 Guan et al. Apr 2016 A1
20170142511 Dennis May 2017 A1
20180192179 Liu Jul 2018 A1
20180206025 Rule Jul 2018 A1
20180233125 Mitchell et al. Aug 2018 A1
20190069069 Radin Feb 2019 A1
Foreign Referenced Citations (5)
Number Date Country
1519625 Mar 2005 EP
5639160 Dec 2014 JP
2006037156 Apr 2006 WO
2007085307 Aug 2007 WO
2012078670 Jun 2012 WO
Non-Patent Literature Citations (14)
Entry
Olwal, A. and Feiner S. Interaction Techniques Using Prosodic Features of Speech and Audio Localization. Proceedings of IUI 2005 (International Conference on Intelligent User Interfaces), San Diego, CA, Jan. 9-12, 2005, p. 284-286.
Bernard Widrow, John R. Glover Jr., John M. McCool, John Kaunitz, Charles S. Williams, Robert H. Hearn, James R. Zeidler, Eugene Dong Jr, and Robert C. Goodlin, Adaptive Noise Cancelling: Principles and Applications, Proceedings of the IEEE, vol. 63, No. 12, Dec. 1975.
Mauro Dentino, John M. McCool, and Bernard Widrow, Adaptive Filtering in the Frequency Domain, Proceedings of the IEEE, vol. 66, No. 12, Dec. 1978.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00282, Dec. 21, 2021.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00242, Dec. 23, 2021.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00243, Dec. 23, 2021.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00234, Dec. 21, 2021.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00253, Jan. 18, 2022.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00324, Jan. 13, 2022.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00281, Jan. 18, 2022.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00302, Jan. 13, 2022.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00369, Feb. 18, 2022.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00388, Feb. 18, 2022.
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00410, Feb. 18, 2022.
Related Publications (1)
Number Date Country
20210168491 A1 Jun 2021 US
Provisional Applications (1)
Number Date Country
62575713 Oct 2017 US
Continuations (2)
Number Date Country
Parent 16555824 Aug 2019 US
Child 17172065 US
Parent 16168752 Oct 2018 US
Child 16555824 US