The present application relates generally to audio processing and, more particularly, to systems and methods for contextual switching of microphones.
It is common for devices such as mobile phones, personal computers (PCs), tablet computers, gaming consoles, and wearables to have more than one microphone and one or more loudspeakers. With every advancing generation of these devices, the market focus has been on enhancing the end-user experience. It may not be feasible to place microphones at desired locations on a mobile phone or other device due to, for example, waterproof designs, single-piece glass designs, curved screens, battery placement, camera location, heart rate sensors, speaker size, infrared (IR)/proximity/humidity/magnetic sensors, and so forth. These design features can make achieving a desired performance challenging in various scenarios. For example, given the form factor of a device and the locations of the microphones and loudspeakers on the device, it is often difficult to achieve the desired noise suppression (NS) and acoustic echo cancellation (AEC) using the same microphone as the primary microphone in different scenarios.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Provided are systems and methods for contextual switching of microphones. An example method includes detecting a change of conditions for capturing an acoustic signal by at least two microphones, a configuration being associated with the at least two microphones. The method allows determining that the change of conditions has been stable, that is, that the changed conditions have persisted for a pre-determined period of time. In response to the determination, the method includes changing the configuration associated with the at least two microphones.
In various embodiments, the microphones include at least a first microphone and a second microphone. The configuration may include having the first microphone assigned to function as a primary microphone and having the second microphone assigned to function as a secondary microphone. In other embodiments, changing the configuration includes assigning the first microphone to function as the secondary microphone and assigning the second microphone to function as the primary microphone.
In some embodiments, the method further includes adjusting tuning parameters for noise suppression (NS) based on the changed configuration. In certain other embodiments, the method further includes adjusting tuning parameters for acoustic echo cancellation (AEC) based on the changed configuration.
In other embodiments, detecting the change of the conditions includes detecting that the first microphone is occluded and the second microphone is not occluded. Occlusion may be detected, for example, based on the microphone energy level. Changing the configuration may include assigning the second microphone to function as a primary microphone.
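By way of illustration and not limitation, a minimal sketch of such an energy-based occlusion check is provided below. The frame representation, the helper names, and the 12 dB threshold are assumptions made for the example only and are not values prescribed by this disclosure.

```python
import numpy as np

OCCLUSION_RATIO_DB = 12.0  # assumed threshold for the example; not prescribed here


def frame_energy_db(frame: np.ndarray) -> float:
    """Mean energy of one audio frame in decibels."""
    return 10.0 * np.log10(np.mean(frame ** 2) + 1e-12)


def is_occluded(candidate: np.ndarray, reference: np.ndarray) -> bool:
    """Flag the candidate microphone as occluded when its frame energy
    falls far below a non-occluded reference microphone's energy."""
    return frame_energy_db(reference) - frame_energy_db(candidate) >= OCCLUSION_RATIO_DB
```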
In some embodiments, detecting the change of the conditions includes detecting presence of a reverberation. The at least two microphones may comprise at least three microphones. In response to the detecting of the presence of the reverberation, changing the configuration includes selecting a first microphone and a second microphone from the at least three microphones for capturing the acoustic signal. The first microphone and the second microphone may be a pair of the microphones that are separated by a maximum distance.
In various embodiments, the conditions are associated with at least one of the following: absence or presence of far-end speech, a type of background noise, sensitivities of the at least two microphones, and seals of the at least two microphones.
In some embodiments, determining the conditions includes one or more of the following: determining a level of signal-to-noise ratio (SNR) in the acoustic signal and determining a level of signal-to-echo ratio (SER) in the acoustic signal.
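As an informal sketch only, the levels above can be approximated per frame as simple power ratios. The noise and echo estimates are assumed to be supplied by other parts of the processing chain (for example, a noise tracker in the NS module and the far-end reference in the AEC module); all names are illustrative.

```python
import numpy as np


def power_db(x: np.ndarray) -> float:
    """Mean signal power in decibels."""
    return 10.0 * np.log10(np.mean(x ** 2) + 1e-12)


def snr_db(frame: np.ndarray, noise_estimate: np.ndarray) -> float:
    """Signal-to-noise ratio of a captured frame against an estimated noise floor."""
    return power_db(frame) - power_db(noise_estimate)


def ser_db(frame: np.ndarray, echo_estimate: np.ndarray) -> float:
    """Signal-to-echo ratio of a captured frame against an estimated echo component."""
    return power_db(frame) - power_db(echo_estimate)
```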
According to another example embodiment of the present disclosure, the steps of the method for contextual switching of microphones are stored on a machine-readable medium comprising instructions, which, when implemented by one or more processors, perform the recited steps.
Other example embodiments of the disclosure and aspects will become apparent from the following description taken in conjunction with the following drawings.
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
The technology disclosed herein relates to systems and methods for contextual switching of microphones. Embodiments of the present technology may be practiced with audio devices operable at least to capture and process acoustic signals.
According to an example embodiment, a method for contextual switching of microphones includes detecting a change of conditions for capturing an acoustic signal by at least two microphones. The method allows determining that the change of conditions has been stable for a pre-determined period of time. In response to the determination, the method enables changing the configuration associated with the at least two microphones.
In some embodiments, the transceiver 110 is configured to communicate with a network such as the Internet, Wide Area Network (WAN), Local Area Network (LAN), cellular network, and so forth, to receive and/or transmit an audio data stream. The received audio data stream may then be forwarded to the audio processing system 150 and the loudspeaker 140.
The processor 130 includes hardware and software that implement the processing of audio data and various other operations depending on a type of the audio device 100 (e.g., communication device and computer), according to some embodiments. A memory (e.g., non-transitory computer readable storage medium) is operable to store, at least in part, instructions and data for execution by processor 130.
In various embodiments, the audio processing system 150 includes hardware and software that implement the encoding of acoustic signal(s). For example, the audio processing system 150 is configured to receive acoustic signals from an acoustic source via microphone(s) 120 (which may be one or more microphones or acoustic sensors) and process the acoustic signals. After reception by the microphone(s) 120, the acoustic signals may be converted into electrical signals by an analog-to-digital converter. In some embodiments, the processing of acoustic signal(s) includes NS and/or AEC. Noise is unwanted sound including street noise, ambient noise, and speech from entities other than an intended speaker. For example, noise sources include a working air conditioner, ventilation fans, TV sets, mobile phones, stereo audio systems, and the like. Certain kinds of noise may arise from both operation of machines (for example, cars) and environments in which they operate (for example, a road, track, tire, wheel, fan, wiper blade, engine, exhaust, entertainment system, wind, rain, waves, and the like).
An example audio processing system suitable for performing noise suppression is discussed in more detail in U.S. patent application Ser. No. 12/832,901 (now U.S. Pat. No. 8,473,287), entitled “Method for Jointly Optimizing Noise Reduction and Voice Quality in a Mono or Multi-Microphone System,” filed Jul. 8, 2010, the disclosure of which is incorporated herein by reference for all purposes. By way of example and not limitation, noise suppression methods are described in U.S. patent application Ser. No. 12/215,980 (now U.S. Pat. No. 9,185,487), entitled “System and Method for Providing Noise Suppression Utilizing Null Processing Noise Subtraction,” filed Jun. 30, 2008, and in U.S. patent application Ser. No. 11/699,732 (now U.S. Pat. No. 8,194,880), entitled “System and Method for Utilizing Omni-Directional Microphones for Speech Enhancement,” filed Jan. 29, 2007, which are incorporated herein by reference in their entireties.
The loudspeaker 140 is a device that provides an audio output to a listener. In some embodiments, the audio device 100 further includes a class-D output, an earpiece of a headset, or a handset.
In various embodiments, sensors 160 include, but are not limited to, an accelerometer, magnetometer, gyroscope, Inertial Measurement Unit (IMU), temperature sensor, altitude sensor, proximity sensor, barometer, humidity sensor, color sensor, light sensor, pressure sensor, GPS module, a beacon, WiFi sensor, ultrasound sensor, infrared sensor, touch sensor, and the like. In some embodiments, the sensor data can be used to estimate conditions and context for capturing acoustic signals by microphone(s) 120.
In various embodiments, each of the microphones 120a, 120b, and 120c is operable to provide predetermined functionality. In a typical situation, when a user speaks during a call on the audio device 100, it is recommended that the microphone closest to the target talker's mouth be configured to serve as the primary microphone on the audio device. In this instance, as shown in FIG. 1, the microphone 120a, which is located at the bottom of the audio device 100 and is therefore closest to the talker's mouth during a handset call, serves as the primary microphone.
In an exemplary scenario, when the loudspeaker 140 is active, the designated microphone 120a, which serves as the primary microphone in this example, can pick up strong echoes due to its close proximity to the loudspeaker 140. In this scenario, it is preferred that the primary microphone that is assigned to capture a target talker be the microphone farthest from the loudspeaker 140. For this example, as shown in FIG. 1, a microphone located farther from the loudspeaker 140 (for example, the microphone 120b or 120c) can be assigned to function as the primary microphone instead.
According to various embodiments, the technologies described herein allow dynamically switching one or more microphone(s) based on near-end (target talker) and far-end conditions. The contextual switching can be based on one or more of the following factors: absence or presence of far-end speech (echo), absence or presence of reverberation, type of background noise, and microphone characteristics such as sensitivities and seals. In some embodiments, the contextual switching is based on values of signal-to-noise ratio (SNR) of signals captured by different microphones 120 of the audio device 100. For example, the decision as to which of two microphones is assigned to function as the primary microphone and which as the secondary microphone can be based on determining which of the microphones 120 currently provides the higher SNR. Similarly, in certain embodiments, the contextual microphone switching is based on a signal-to-echo ratio (SER) in signals captured by different microphones 120 of the audio device 100.
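A hedged sketch of such an SNR-based role assignment follows; it assumes per-microphone SNR estimates are already available (for example, computed as in the sketch above) and simply designates the microphone with the highest SNR as primary.

```python
def assign_roles(snr_by_mic: dict) -> dict:
    """Designate the microphone with the highest SNR as the primary
    microphone and the remaining microphones as secondary.

    `snr_by_mic` maps microphone identifiers to SNR values in dB.
    """
    primary = max(snr_by_mic, key=snr_by_mic.get)
    return {mic: ("primary" if mic == primary else "secondary") for mic in snr_by_mic}


# Example: microphone 120b currently has the best SNR, so it becomes primary.
roles = assign_roles({"120a": 4.0, "120b": 17.5, "120c": 11.0})
```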
In some embodiments, one of the microphones 120, for example the microphone 120a, can be occluded. For instance, the microphone located at the bottom of the audio device 100 (the microphone 120a in FIG. 1) can be covered by a mobile phone case, a user's hand, or a cup holder. When such an occlusion is detected, for example based on the microphone energy level, the configuration can be changed so that an available non-occluded microphone is assigned to function as the primary microphone.
In various embodiments, the two states 310 and 320 in FIG. 3 correspond to two different microphone configurations with associated tuning parameters. Switching between the states can be driven by, for example, the following situations:
1) Tuning for aggressiveness of NS: apply more suppression in low SNR conditions and less suppression in high SNR conditions, for stationary or non-stationary distractors;
2) Robust tuning for AEC: based on detection of far-end activity, switch the primary microphone to the microphone farthest from the loudspeaker, or adjust gains on the microphone closest to the loudspeaker to avoid clipping;
3) Reverberant conditions: when reverberant conditions are detected, use microphones that are separated by a maximum distance to remove reverberation from the target speech; and
4) Microphone occlusion: if the microphone is occluded due to a mobile phone case, a user's hand, or a cup holder covering the microphone, switch to using available non-occluded microphone(s).
Condition cues for switching between the states are checked in blocks 330 and 340, which are also referred to as “Check cues for switch” blocks. In the blocks 330 and 340, raw features are used for recognizing conditions for making a switch between the states. In various embodiments, the subset of cues used for making the decision includes, but is not limited to, the presence or absence of far-end activity, the SNR and SER levels, reverberation, and microphone occlusion.
If conditions for a switch are met, then in blocks 350 and 360, a check for the stability of cues for a pre-determined period of time is executed. In various embodiments, the pre-determined period of time is in a range from approximately 20 milliseconds to 50 milliseconds. The transition between the state 310 and the state 320 is executed in response to the conditions for a switch being met for the pre-determined period of time. Otherwise, the existing configuration of microphones and the associated tuning parameters continue to be used.
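Read informally, the stability check in blocks 350 and 360 behaves like a debounce: the switch cue must hold for the entire pre-determined period before the transition is committed. The sketch below is illustrative only; the 10 ms frame hop and the 30 ms hold-off (within the approximately 20-50 millisecond range above) are assumptions.

```python
FRAME_HOP_MS = 10   # assumed frame hop
HOLD_OFF_MS = 30    # assumed hold-off within the ~20-50 ms range discussed above
HOLD_OFF_FRAMES = HOLD_OFF_MS // FRAME_HOP_MS


class SwitchDebouncer:
    """Commit a configuration switch only after the switch cue has been
    continuously asserted for the pre-determined period of time."""

    def __init__(self, hold_off_frames: int = HOLD_OFF_FRAMES) -> None:
        self.hold_off_frames = hold_off_frames
        self.count = 0

    def update(self, cue_active: bool) -> bool:
        """Feed one frame's cue decision; return True once the cue has been
        stable long enough to execute the transition between states."""
        self.count = self.count + 1 if cue_active else 0
        if self.count >= self.hold_off_frames:
            self.count = 0  # re-arm after committing a switch
            return True
        return False
```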
In block 410, the example method 400 includes detecting a change of conditions for capturing an acoustic signal by at least two microphones. The example method 400 includes determining that the change of conditions has been stable for a pre-determined period of time in block 420. In block 430, in response to the determination, the example method 400 includes switching a configuration associated with the microphones.
In some embodiments, the example method 400 includes optionally adjusting tuning parameters for noise suppression based on the changed configuration in block 440. In other embodiments, the example method 400 includes optionally adjusting tuning parameters for acoustic echo cancellation based on the changed configuration in block 450.
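Pulling blocks 410-450 together, one possible end-to-end reading of the example method 400 is sketched below. The condition detector, the debouncer, and the per-configuration NS/AEC tuning values are stand-ins for whatever concrete detectors and tunings a particular device employs; none of the specific names or values is taken from this disclosure.

```python
# Assumed per-configuration tuning presets, for the example only.
TUNING_PRESETS = {
    "120a_primary": {"ns_aggressiveness": 0.5, "aec_tail_ms": 64},
    "120b_primary": {"ns_aggressiveness": 0.8, "aec_tail_ms": 128},
}


def method_400(frames, detector, debouncer, config="120a_primary"):
    """Blocks 410-450: detect a change of conditions, require the change to
    be stable, then switch the microphone configuration and its NS/AEC tuning."""
    tuning = TUNING_PRESETS[config]
    for frame in frames:
        cue_active = detector(frame, config)      # block 410: change of conditions?
        if debouncer.update(cue_active):          # block 420: stable long enough?
            config = ("120b_primary" if config == "120a_primary"
                      else "120a_primary")        # block 430: switch configuration
            tuning = TUNING_PRESETS[config]       # blocks 440/450: adjust NS/AEC tuning
        yield frame, config, tuning
```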
In block 610, the method 600 may include detecting presence of reverberation in an acoustic signal captured by at least three microphones. In block 620, the method 600 includes selecting a first microphone and a second microphone from the at least three microphones for capturing the acoustic signal. The first and the second microphones may be separated by a maximum distance, the first and the second microphones being utilized to remove the reverberation in the captured acoustic signal.
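A brief sketch of the selection in block 620 follows; microphone positions are assumed to be known coordinates on the device, and a pairwise search returns the two microphones separated by the maximum distance.

```python
from itertools import combinations
import math


def farthest_pair(mic_positions: dict) -> tuple:
    """Return the identifiers of the two microphones separated by the maximum
    distance, for use in removing reverberation from the captured signal.

    `mic_positions` maps microphone identifiers to (x, y, z) coordinates.
    """
    return max(combinations(mic_positions, 2),
               key=lambda pair: math.dist(mic_positions[pair[0]],
                                          mic_positions[pair[1]]))


# Example with three assumed positions in meters (device coordinates):
mics = {"120a": (0.00, 0.00, 0.0), "120b": (0.00, 0.14, 0.0), "120c": (0.02, 0.07, 0.0)}
first, second = farthest_pair(mics)  # -> ("120a", "120b")
```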
The components shown in FIG. 7 are depicted as being connected via a single bus. The components may alternatively be connected through one or more data transport means.
Mass data storage 730, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit(s) 710. Mass data storage 730 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 720.
Portable storage device 740 operates in conjunction with a portable non-volatile storage medium, such as a flash drive, floppy disk, compact disk, digital video disc, or Universal Serial Bus (USB) storage device, to input and output data and code to and from the computer system 700 of FIG. 7.
User input devices 760 can provide a portion of a user interface. User input devices 760 may include one or more microphones, an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. User input devices 760 can also include a touchscreen. Additionally, the computer system 700 as shown in FIG. 7 includes output devices, such as speakers, printers, network interfaces, and monitors.
Graphics display system 770 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 770 is configurable to receive textual and graphical information and process the information for output to the display device.
Peripheral devices 780 may include any type of computer support device to add functionality to the computer system.
The components provided in the computer system 700 of FIG. 7 are those typically found in computer systems that may be suitable for use with embodiments of the present disclosure, and are intended to represent a broad category of such computer components that are well known in the art.
The processing for various embodiments may be implemented in software that is cloud-based. In some embodiments, the computer system 700 is implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computer system 700 may itself include a cloud-based computing environment, where the functionalities of the computer system 700 are executed in a distributed fashion. Thus, the computer system 700, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.
In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.
The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computer system 700, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.
The present technology is described above with reference to example embodiments; other variations upon the example embodiments are intended to be covered by the present disclosure.
The present application claims the benefit of U.S. Provisional Patent Application No. 62/110,171, filed Jan. 30, 2015. The subject matter of the aforementioned application is incorporated herein by reference for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
4025724 | Davidson, Jr. et al. | May 1977 | A |
4802227 | Elko et al. | Jan 1989 | A |
4969203 | Herman | Nov 1990 | A |
5115404 | Lo et al. | May 1992 | A |
5289273 | Lang | Feb 1994 | A |
5440751 | Santeler et al. | Aug 1995 | A |
5544346 | Amini et al. | Aug 1996 | A |
5555306 | Gerzon | Sep 1996 | A |
5625697 | Bowen et al. | Apr 1997 | A |
5715319 | Chu | Feb 1998 | A |
5734713 | Mauney et al. | Mar 1998 | A |
5774837 | Yeldener et al. | Jun 1998 | A |
5819215 | Dobson et al. | Oct 1998 | A |
5850453 | Klayman et al. | Dec 1998 | A |
5978567 | Rebane et al. | Nov 1999 | A |
5991385 | Dunn et al. | Nov 1999 | A |
6011853 | Koski et al. | Jan 2000 | A |
6035177 | Moses et al. | Mar 2000 | A |
6065883 | Herring et al. | May 2000 | A |
6084916 | Ott | Jul 2000 | A |
6144937 | Ali | Nov 2000 | A |
6188769 | Jot et al. | Feb 2001 | B1 |
6219408 | Kurth | Apr 2001 | B1 |
6281749 | Klayman et al. | Aug 2001 | B1 |
6327370 | Killion et al. | Dec 2001 | B1 |
6381284 | Strizhevskiy | Apr 2002 | B1 |
6381469 | Wojick | Apr 2002 | B1 |
6389142 | Hagen et al. | May 2002 | B1 |
6480610 | Fang et al. | Nov 2002 | B1 |
6504926 | Edelson et al. | Jan 2003 | B1 |
6615169 | Ojala et al. | Sep 2003 | B1 |
6717991 | Gustafsson et al. | Apr 2004 | B1 |
6748095 | Goss | Jun 2004 | B1 |
6768979 | Menendez-Pidal et al. | Jul 2004 | B1 |
6873837 | Yoshioka et al. | Mar 2005 | B1 |
6882736 | Dickel et al. | Apr 2005 | B2 |
6931123 | Hughes | Aug 2005 | B1 |
6980528 | LeBlanc et al. | Dec 2005 | B1 |
7010134 | Jensen | Mar 2006 | B2 |
RE39080 | Johnston | Apr 2006 | E |
7028547 | Shiratori et al. | Apr 2006 | B2 |
7035666 | Silberfenig et al. | Apr 2006 | B2 |
7058572 | Nemer | Jun 2006 | B1 |
7103176 | Rodriguez et al. | Sep 2006 | B2 |
7145710 | Holmes | Dec 2006 | B2 |
7171008 | Elko | Jan 2007 | B2 |
7190775 | Rambo | Mar 2007 | B2 |
7221622 | Matsuo et al. | May 2007 | B2 |
7245710 | Hughes | Jul 2007 | B1 |
7447631 | Truman et al. | Nov 2008 | B2 |
7548791 | Johnston | Jun 2009 | B1 |
7562140 | Clemm et al. | Jul 2009 | B2 |
7617282 | Han | Nov 2009 | B2 |
7664495 | Bonner et al. | Feb 2010 | B1 |
7685132 | Hyman | Mar 2010 | B2 |
7773741 | LeBlanc et al. | Aug 2010 | B1 |
7791508 | Wegener | Sep 2010 | B2 |
7796978 | Jones et al. | Sep 2010 | B2 |
7899565 | Johnston | Mar 2011 | B1 |
7940986 | Mekenkamp et al. | May 2011 | B2 |
7970123 | Beaucoup | Jun 2011 | B2 |
7978178 | Pehlivan et al. | Jul 2011 | B2 |
8036767 | Soulodre | Oct 2011 | B2 |
8175291 | Chan et al. | May 2012 | B2 |
8189429 | Chen et al. | May 2012 | B2 |
8194880 | Avendano | Jun 2012 | B2 |
8204253 | Solbach | Jun 2012 | B1 |
8229137 | Romesburg | Jul 2012 | B2 |
8233352 | Beaucoup | Jul 2012 | B2 |
8363823 | Santos | Jan 2013 | B1 |
8369973 | Risbo | Feb 2013 | B2 |
8467891 | Huang et al. | Jun 2013 | B2 |
8531286 | Friar et al. | Sep 2013 | B2 |
8606249 | Goodwin | Dec 2013 | B1 |
8615392 | Goodwin | Dec 2013 | B1 |
8615394 | Avendano et al. | Dec 2013 | B1 |
8639516 | Lindahl et al. | Jan 2014 | B2 |
8694310 | Taylor | Apr 2014 | B2 |
8705759 | Wolff et al. | Apr 2014 | B2 |
8712069 | Murgia et al. | Apr 2014 | B1 |
8750526 | Santos et al. | Jun 2014 | B1 |
8774423 | Solbach | Jul 2014 | B1 |
8775128 | Meduna et al. | Jul 2014 | B2 |
8798290 | Choi et al. | Aug 2014 | B1 |
8903721 | Cowan | Dec 2014 | B1 |
9007416 | Murgia et al. | Apr 2015 | B1 |
9197974 | Clark et al. | Nov 2015 | B1 |
9210503 | Avendano et al. | Dec 2015 | B2 |
9438992 | Every | Sep 2016 | B2 |
20020041678 | Basburg-Ertem et al. | Apr 2002 | A1 |
20020071342 | Marple et al. | Jun 2002 | A1 |
20020138263 | Deligne et al. | Sep 2002 | A1 |
20020160751 | Sun et al. | Oct 2002 | A1 |
20020177995 | Walker | Nov 2002 | A1 |
20030023430 | Wang et al. | Jan 2003 | A1 |
20030056220 | Thornton et al. | Mar 2003 | A1 |
20030093279 | Malah et al. | May 2003 | A1 |
20030099370 | Moore | May 2003 | A1 |
20030107888 | Devlin et al. | Jun 2003 | A1 |
20030118200 | Beaucoup et al. | Jun 2003 | A1 |
20030147538 | Elko | Aug 2003 | A1 |
20030177006 | Ichikawa et al. | Sep 2003 | A1 |
20030179888 | Burnett et al. | Sep 2003 | A1 |
20040001450 | He et al. | Jan 2004 | A1 |
20040066940 | Amir | Apr 2004 | A1 |
20040076190 | Goel et al. | Apr 2004 | A1 |
20040102967 | Furuta et al. | May 2004 | A1 |
20040145871 | Lee | Jul 2004 | A1 |
20040184882 | Cosgrove | Sep 2004 | A1 |
20050008169 | Muren et al. | Jan 2005 | A1 |
20050033200 | Soehren et al. | Feb 2005 | A1 |
20050080616 | Leung et al. | Apr 2005 | A1 |
20050094610 | de Clerq et al. | May 2005 | A1 |
20050114123 | Lukac et al. | May 2005 | A1 |
20050172311 | Hjelt et al. | Aug 2005 | A1 |
20050213739 | Rodman et al. | Sep 2005 | A1 |
20050240399 | Makinen | Oct 2005 | A1 |
20050249292 | Zhu | Nov 2005 | A1 |
20050261896 | Schuijers et al. | Nov 2005 | A1 |
20050267369 | Lazenby et al. | Dec 2005 | A1 |
20050276363 | Joublin et al. | Dec 2005 | A1 |
20050281410 | Grosvenor et al. | Dec 2005 | A1 |
20050283544 | Yee | Dec 2005 | A1 |
20060063560 | Herle | Mar 2006 | A1 |
20060092918 | Talalai | May 2006 | A1 |
20060100868 | Hetherington et al. | May 2006 | A1 |
20060122832 | Takiguchi et al. | Jun 2006 | A1 |
20060136203 | Ichikawa | Jun 2006 | A1 |
20060206320 | Li | Sep 2006 | A1 |
20060224382 | Taneda | Oct 2006 | A1 |
20060282263 | Vos et al. | Dec 2006 | A1 |
20070003097 | Langberg et al. | Jan 2007 | A1 |
20070005351 | Sathyendra et al. | Jan 2007 | A1 |
20070025562 | Zalewski et al. | Feb 2007 | A1 |
20070033020 | Francois et al. | Feb 2007 | A1 |
20070041589 | Patel et al. | Feb 2007 | A1 |
20070058822 | Ozawa | Mar 2007 | A1 |
20070064817 | Dunne et al. | Mar 2007 | A1 |
20070081075 | Canova, Jr. et al. | Apr 2007 | A1 |
20070127668 | Ahya et al. | Jun 2007 | A1 |
20070185587 | Kondo | Aug 2007 | A1 |
20070253574 | Soulodre | Nov 2007 | A1 |
20070273583 | Rosenberg | Nov 2007 | A1 |
20070282604 | Gartner et al. | Dec 2007 | A1 |
20070287490 | Green et al. | Dec 2007 | A1 |
20080069366 | Soulodre | Mar 2008 | A1 |
20080111734 | Fam et al. | May 2008 | A1 |
20080159507 | Virolainen et al. | Jul 2008 | A1 |
20080160977 | Ahmaniemi et al. | Jul 2008 | A1 |
20080187143 | Mak-Fan | Aug 2008 | A1 |
20080192955 | Merks | Aug 2008 | A1 |
20080233934 | Diethorn | Sep 2008 | A1 |
20080247567 | Kjolerbakken et al. | Oct 2008 | A1 |
20080259731 | Happonen | Oct 2008 | A1 |
20080298571 | Kurtz et al. | Dec 2008 | A1 |
20080304677 | Abolfathi et al. | Dec 2008 | A1 |
20080317259 | Zhang et al. | Dec 2008 | A1 |
20090034755 | Short et al. | Feb 2009 | A1 |
20090060222 | Jeong et al. | Mar 2009 | A1 |
20090063143 | Schmidt et al. | Mar 2009 | A1 |
20090089054 | Wang et al. | Apr 2009 | A1 |
20090116656 | Lee et al. | May 2009 | A1 |
20090134829 | Baumann et al. | May 2009 | A1 |
20090141908 | Jeong et al. | Jun 2009 | A1 |
20090147942 | Culter | Jun 2009 | A1 |
20090150149 | Culter et al. | Jun 2009 | A1 |
20090164905 | Ko | Jun 2009 | A1 |
20090167862 | Jentoft et al. | Jul 2009 | A1 |
20090192791 | El-Maleh et al. | Jul 2009 | A1 |
20090204413 | Sintes et al. | Aug 2009 | A1 |
20090226010 | Schnell et al. | Sep 2009 | A1 |
20090240497 | Usher et al. | Sep 2009 | A1 |
20090264114 | Virolainen et al. | Oct 2009 | A1 |
20090303350 | Terada | Dec 2009 | A1 |
20090323655 | Cardona et al. | Dec 2009 | A1 |
20090323925 | Sweeney et al. | Dec 2009 | A1 |
20090323981 | Cutler | Dec 2009 | A1 |
20090323982 | Solbach et al. | Dec 2009 | A1 |
20100017205 | Visser et al. | Jan 2010 | A1 |
20100033427 | Marks et al. | Feb 2010 | A1 |
20100036659 | Haulick et al. | Feb 2010 | A1 |
20100081487 | Chen et al. | Apr 2010 | A1 |
20100092007 | Sun | Apr 2010 | A1 |
20100105447 | Sibbald et al. | Apr 2010 | A1 |
20100128123 | DiPoala | May 2010 | A1 |
20100130198 | Kannappan et al. | May 2010 | A1 |
20100134241 | Gips et al. | Jun 2010 | A1 |
20100157168 | Dunton et al. | Jun 2010 | A1 |
20100174506 | Joseph et al. | Jul 2010 | A1 |
20100210975 | Anthony et al. | Aug 2010 | A1 |
20100215184 | Buck et al. | Aug 2010 | A1 |
20100217837 | Ansari et al. | Aug 2010 | A1 |
20100245624 | Beaucoup | Sep 2010 | A1 |
20100278352 | Petit et al. | Nov 2010 | A1 |
20100303298 | Marks et al. | Dec 2010 | A1 |
20100315482 | Rosenfeld et al. | Dec 2010 | A1 |
20110038486 | Beaucoup | Feb 2011 | A1 |
20110038557 | Closset et al. | Feb 2011 | A1 |
20110044324 | Li et al. | Feb 2011 | A1 |
20110075857 | Aoyagi | Mar 2011 | A1 |
20110081024 | Soulodre | Apr 2011 | A1 |
20110081026 | Ramakrishnan et al. | Apr 2011 | A1 |
20110107367 | Georgis et al. | May 2011 | A1 |
20110125063 | Shalon et al. | May 2011 | A1 |
20110129095 | Avendano et al. | Jun 2011 | A1 |
20110173006 | Nagel et al. | Jul 2011 | A1 |
20110173542 | Imes et al. | Jul 2011 | A1 |
20110182436 | Murgia et al. | Jul 2011 | A1 |
20110191101 | Uhle et al. | Aug 2011 | A1 |
20110224994 | Norvell et al. | Sep 2011 | A1 |
20110280154 | Silverstrim et al. | Nov 2011 | A1 |
20110286605 | Furuta et al. | Nov 2011 | A1 |
20110300806 | Lindahl et al. | Dec 2011 | A1 |
20110305345 | Bouchard et al. | Dec 2011 | A1 |
20120027217 | Jun et al. | Feb 2012 | A1 |
20120050582 | Seshadri et al. | Mar 2012 | A1 |
20120062729 | Hart et al. | Mar 2012 | A1 |
20120116769 | Malah et al. | May 2012 | A1 |
20120133728 | Lee | May 2012 | A1 |
20120169482 | Chen et al. | Jul 2012 | A1 |
20120182429 | Forutanpour et al. | Jul 2012 | A1 |
20120202485 | Mirbaha et al. | Aug 2012 | A1 |
20120209611 | Furuta et al. | Aug 2012 | A1 |
20120231778 | Chen et al. | Sep 2012 | A1 |
20120249785 | Sudo et al. | Oct 2012 | A1 |
20120250882 | Mohammad et al. | Oct 2012 | A1 |
20120257778 | Hall et al. | Oct 2012 | A1 |
20120265716 | Hunzinger et al. | Oct 2012 | A1 |
20120316784 | Chrysanthakopoulos | Dec 2012 | A1 |
20120317149 | Jagota et al. | Dec 2012 | A1 |
20130034243 | Yermeche et al. | Feb 2013 | A1 |
20130051543 | McDysan et al. | Feb 2013 | A1 |
20130080843 | Stergiou et al. | Mar 2013 | A1 |
20130182857 | Namba et al. | Jul 2013 | A1 |
20130282372 | Visser et al. | Oct 2013 | A1 |
20130322461 | Poulsen | Dec 2013 | A1 |
20130325616 | Ramde et al. | Dec 2013 | A1 |
20130332156 | Tackin et al. | Dec 2013 | A1 |
20130332171 | Avendano et al. | Dec 2013 | A1 |
20140003622 | Ikizyan et al. | Jan 2014 | A1 |
20140129178 | Meduna et al. | May 2014 | A1 |
20140181715 | Axelrod et al. | Jun 2014 | A1 |
20140187258 | Khorashadi et al. | Jul 2014 | A1 |
20140274218 | Kadiwala | Sep 2014 | A1 |
20150012248 | Meduna et al. | Jan 2015 | A1 |
Number | Date | Country |
---|---|---|
1536660 | Jun 2005 | EP |
20125600 | Jun 2012 | FI |
H05300419 | Nov 1993 | JP |
H07336793 | Dec 1995 | JP |
2006515490 | May 2006 | JP |
2007201818 | Aug 2007 | JP |
2008542798 | Nov 2008 | JP |
2009037042 | Feb 2009 | JP |
2013513306 | Apr 2013 | JP |
5855571 | Dec 2015 | JP |
1020120101457 | Sep 2012 | KR |
201043475 | Dec 2011 | TW |
WO8400634 | Feb 1984 | WO |
WO2004047011 | Jun 2004 | WO |
WO2008034221 | Mar 2008 | WO |
WO2009093161 | Jul 2009 | WO |
WO2009132920 | Nov 2009 | WO |
WO2011068901 | Jun 2011 | WO |
WO2011092549 | Aug 2011 | WO |
WO2012094522 | Jul 2012 | WO |
WO2013188562 | Dec 2013 | WO |
WO2014074268 | May 2014 | WO |
WO2014127543 | Aug 2014 | WO |
Entry |
---|
International Search Report and Written Opinion mailed May 23, 2012 in Patent Cooperation Treaty Application No. PCT/US2012/020365, filed May 1, 2012. |
Hjorth, Bo. “EEG Analysis Based on Time Domain Properties.” Electroencephalography and Clinical Neurophysiology, vol. 29, No. 3 (1970). pp. 306-310. |
Hjorth, Bo. “The Physical Significance of Time Domain Descriptions in EEG Analysis,” Electroencephalography and Clinical Neurophysiology, vol. 34, No. 3, (1973). pp. 321-325. |
Hjorth, Bo. “An On-line Transformation of EEG Scalp Potentials into Orthogonal Source Derivations,” Electroencephalography and Clinical Neurophysiology, vol. 39, No. 5 (1975). pp. 526-530. |
Jimenez et al., “A Comparison of Pedestrian Dead-Reckoning Algorithms Using a Low-Cost MEMS IMU,” WISP 2009. 6th IEEE International Symposium on Intelligent Signal Processing, Aug. 26-28, 2009. pp. 37-42. |
International Search Report and Written Opinion mailed May 7, 2014 in Patent Cooperation Treaty Application No. PCT/US2013/064645 filed Oct. 11, 2013. |
International Search Report and Written Opinion mailed Apr. 8, 2016 in Patent Cooperation Treaty Application No. PCT/US2016/015801. |
Non-Final Office Action, May 9, 2014, U.S. Appl. No. 13/343,654, filed Jan. 4, 2012. |
Final Office Action, Aug. 4, 2014, U.S. Appl. No. 13/343,654, filed Jan. 4, 2012. |
Non-Final Office Action, Feb. 26, 2015, U.S. Appl. No. 13/343,654, filed Jan. 4, 2012. |
Final Office Action, Sep. 8, 2015, U.S. Appl. No. 13/343,654, filed Jan. 4, 2012. |
Non-Final Office Action, Dec. 17, 2014, U.S. Appl. No. 14/321,707, filed Jul. 1, 2014. |
Final Office Action, Aug. 7, 2015, U.S. Appl. No. 14/321,707, filed Jul. 1, 2014. |
Non-Final Office Action, Mar. 18, 2016, U.S. Appl. No. 14/321,707, filed Jul. 1, 2014. |
Non-Final Office Action, Mar. 31, 2016, U.S. Appl. No. 14/683,057, filed Apr. 9, 2015. |
International Search Report and Written Opinion dated Feb. 7, 2011 in Patent Cooperation Treaty Application No. PCT/US10/58600. |
International Search Report dated Dec. 20, 2013 in Patent Cooperation Treaty Application No. PCT/US2013/045462, filed Jun. 12, 2013. |
Office Action dated Aug. 26, 2014 in Japan Application No. 2012-542167, filed Dec. 1, 2010. |
Office Action mailed Oct. 31, 2014 in Finland Patent Application No. 20125600, filed Jun. 1, 2012. |
Office Action mailed Jul. 21, 2015 in Japan Patent Application No. 2012-542167, filed Dec. 1, 2010. |
Office Action mailed Sep. 29, 2015 in Finland Patent Application No. 20125600, filed Dec. 1, 2010. |
Allowance mailed Nov. 17, 2015 in Japan Patent Application No. 2012-542167, filed Dec. 1, 2010. |
International Search Report & Written Opinion dated Dec. 14, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/049816, filed Sep. 11, 2015. |
International Search Report & Written Opinion dated Dec. 22, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/052433, filed Sep. 25, 2015. |
International Search Report & Written Opinion dated Feb. 11, 2016 in Patent Cooperation Treaty Application No. PCT/US2015/063519, filed Dec. 2, 2015. |
Number | Date | Country | |
---|---|---|---|
20160227336 A1 | Aug 2016 | US |
Number | Date | Country | |
---|---|---|---|
62110171 | Jan 2015 | US |