The present disclosure relates to devices, methods, and/or systems for monitoring a user's physiological information using an auricular device configured with an indicator and a beamformer filter unit.
Hospitals, nursing homes, and other patient care facilities typically utilize a number of sensors, devices, and/or monitors to collect or analyze a user's (which may also be referred to as a "subject," "wearer," "individual," or "patient," and/or the like) physiological parameters such as blood oxygen saturation level, temperature, respiratory rate, pulse rate, blood pressure, and the like. Such devices can include, for example, acoustic sensors, electroencephalogram (EEG) sensors, electrocardiogram (ECG) devices, blood pressure monitors, temperature sensors, and pulse oximeters, among others. In medical environments, various sensors/devices (such as those just mentioned) can be attached to a patient and connected to one or more patient monitoring devices using cables or via wireless connection. Patient monitoring devices generally include sensors, processing equipment, and displays for obtaining and analyzing a medical patient's physiological parameters. Clinicians, including doctors, nurses, and other medical personnel, use the physiological parameters obtained from patient monitors to determine a patient's physiological status, diagnose illnesses, and prescribe treatments. Clinicians also use the physiological parameters to monitor patients during various clinical situations to determine whether to increase the level of medical care given to patients.
In some aspects, the techniques described herein relate to a system including: an external device configured to transmit a first audio data to an ear-bud, the first audio data corresponding to a sound received from an audio source, wherein the external device is at a first location with respect to the audio source; and the ear-bud configured to be positioned within an ear canal of a user, the ear-bud including: a microphone configured to generate audio data responsive to detecting audio; a storage device, configured to store computer-executable instructions; and one or more processors in communication with the storage device, wherein the computer-executable instructions, when executed by the one or more processors, cause the one or more processors to: receive the first audio data from the external device; receive a second audio data from the microphone, wherein the second audio data corresponds to a sound received from the audio source, wherein the ear-bud is at a second location with respect to the audio source; estimate an acoustic environment based on the first audio data and the second audio data, the acoustic environment including at least a first distance between the ear-bud and the audio source, and a second distance between the external device and the audio source; generate a third audio data based on the acoustic environment, the first audio data, and the second audio data; and cause a speaker to emit the third audio data within the ear canal of the user, such that the user perceives a sound as originating from the second location and having an orientation of the ear-bud.
In some aspects, the techniques described herein relate to a system, wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to: in response to receiving the first audio data and the second audio data, determine a first distance between the ear-bud and the audio source, and a second distance between the external device and the audio source; and determine a first orientation of the ear-bud, and a second orientation of the external device.
In some aspects, the techniques described herein relate to a system, wherein the ear-bud further includes: a second microphone configured to generate audio data responsive to detecting audio; and wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to: determine the first distance based on a comparison of the second audio data detected at the microphone and at the second microphone.
In some aspects, the techniques described herein relate to a system, wherein the external device further includes a first microphone and a second microphone configured to generate audio data responsive to detecting audio; and wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to: determine the second distance based on a comparison of the first audio data received from the first microphone and the second microphone of the external device.
In some aspects, the techniques described herein relate to a system, wherein the first orientation includes an estimated heading of the ear-bud and a direction of the second audio data, and wherein the second orientation includes an estimated heading of the external device and a direction of the first audio data.
In some aspects, the techniques described herein relate to a system, wherein the third audio data is generated based on spatially processing the first audio data and the second audio data.
In some aspects, the techniques described herein relate to a system, wherein the first orientation of the ear-bud is generated at least in part by an IMU.
In some aspects, the techniques described herein relate to a system, wherein the external device is at least one of a case, a podium, or a desktop microphone.
In some aspects, the techniques described herein relate to an ear-bud configured to be positioned within an ear canal of a user, the ear-bud including: a microphone configured to generate audio data responsive to detecting audio; a storage device, configured to store computer-executable instructions; and one or more processors in communication with the storage device, wherein the computer-executable instructions, when executed by the one or more processors, cause the one or more processors to: receive a first audio data from an external device, the first audio data corresponding to a sound received from an audio source, wherein the external device is at a first location with respect to the audio source; receive a second audio data from the microphone, wherein the second audio data corresponds to a sound received from the audio source, wherein the ear-bud is at a second location with respect to the audio source; estimate an acoustic environment based on the first audio data and the second audio data, the acoustic environment including at least a first distance between the ear-bud and the audio source, and a second distance between the external device and the audio source; generate a third audio data based on the acoustic environment, the first audio data, and the second audio data; and cause a speaker to emit the third audio data within the ear canal of the user, such that the user perceives a sound as originating from the second location and having an orientation of the ear-bud.
In some aspects, the techniques described herein relate to an ear-bud, wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to: in response to receiving the first audio data and the second audio data, determine the first distance between the ear-bud and the audio source, and the second distance between the external device and the audio source; and determine a first orientation of the ear-bud, and a second orientation of the external device.
In some aspects, the techniques described herein relate to an ear-bud, further including: a second microphone configured to generate audio data responsive to detecting audio; and wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to: determine the first distance based on a comparison of the second audio data detected at the microphone and at the second microphone.
In some aspects, the techniques described herein relate to an ear-bud, wherein the computer-executable instructions, when executed by the one or more processors, further cause the one or more processors to: determine the second distance based on a comparison of the first audio data received from a first microphone and a second microphone of the external device.
In some aspects, the techniques described herein relate to an ear-bud, wherein the first orientation includes an estimated heading of the ear-bud and a direction of the second audio data, and wherein the second orientation includes an estimated heading of the external device and a direction of the first audio data.
In some aspects, the techniques described herein relate to an ear-bud, wherein the third audio data is generated based on spatially processing the first audio data and the second audio data.
In some aspects, the techniques described herein relate to an ear-bud, wherein the first orientation of the ear-bud is generated at least in part by an IMU.
In some aspects, the techniques described herein relate to an ear-bud, wherein the first audio data is received from a case, a podium, or a desktop microphone.
In some aspects, the techniques described herein relate to a method including: receiving a first audio data from an external device, the first audio data corresponding to a sound received from an audio source, wherein the external device is at a first location with respect to the audio source; receiving a second audio data from a microphone of an ear-bud, wherein the second audio data corresponds to a sound received from the audio source, wherein the ear-bud is at a second location with respect to the audio source; estimating an acoustic environment based on the first audio data and the second audio data, the acoustic environment including at least a first distance between the ear-bud and the audio source, and a second distance between the external device and the audio source; generating a third audio data based on the acoustic environment, the first audio data, and the second audio data; and causing a speaker to emit the third audio data within an ear canal of a user, such that the user perceives a sound as originating from the second location and having an orientation of the ear-bud.
In some aspects, the techniques described herein relate to a method, further including: in response to receiving the first audio data and the second audio data, determining a first distance between the ear-bud and the audio source, and a second distance between the external device and the audio source; and determining a first orientation of the ear-bud, and a second orientation of the external device.
In some aspects, the techniques described herein relate to a method, wherein the first orientation of the ear-bud is generated at least in part by an IMU.
In some aspects, the techniques described herein relate to a method, wherein the third audio data is generated based on spatially processing the first audio data and the second audio data.
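For illustration only, the following Python sketch shows one way the method recited above could be realized: the relative arrival delay between the external device's capture and the ear-bud's capture is estimated by cross-correlation and used as a crude acoustic-environment cue when blending the two streams into the third audio data. The delay estimator, the attenuation model, and all function names are assumptions made for the sketch, not limitations of this disclosure.

```python
import numpy as np

def estimate_relative_delay(ext_audio, bud_audio, fs):
    """Estimate how much later the sound arrived at the ear-bud than at
    the external device, via the peak of the cross-correlation."""
    corr = np.correlate(bud_audio, ext_audio, mode="full")
    lag = int(np.argmax(corr)) - (len(ext_audio) - 1)
    return lag / fs  # seconds; positive => ear-bud heard the sound later

def render_third_audio(ext_audio, bud_audio, fs, speed_of_sound=343.0):
    """Blend the two captures into a 'third audio data' using the
    estimated extra path length as a crude distance cue."""
    delay_s = estimate_relative_delay(ext_audio, bud_audio, fs)
    extra_path_m = max(delay_s, 0.0) * speed_of_sound
    w = 1.0 / (1.0 + extra_path_m)  # assumed distance-based weighting
    n = min(len(ext_audio), len(bud_audio))
    return w * ext_audio[:n] + (1.0 - w) * bud_audio[:n]
```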
For purposes of summarizing the disclosure, certain aspects, advantages, and novel features are discussed herein. It is to be understood that not necessarily all such aspects, advantages, or features will be embodied in any particular embodiment of the disclosure, and an artisan would recognize from the disclosure herein a myriad of combinations of such aspects, advantages, or features.
Example features of the present disclosure, its nature and various advantages will be apparent from the accompanying drawings and the following detailed description of various implementations. Non-limiting and non-exhaustive implementations are described with reference to the accompanying drawings, wherein like labels or reference numbers refer to like parts throughout the various views unless otherwise specified. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements may be selected, enlarged, and positioned to improve drawing legibility. The particular shapes of the elements as drawn have been selected for ease of recognition in the drawings.
Various features and advantages of this disclosure will now be described with reference to the accompanying figures. The following description is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses. This disclosure extends beyond the specifically disclosed embodiments and/or uses and obvious modifications and equivalents thereof. Thus, it is intended that the scope of this disclosure should not be limited by any particular embodiments described below. The features of the illustrated embodiments can be modified, combined, removed, and/or substituted as will be apparent to those of ordinary skill in the art upon consideration of the principles disclosed herein.
Hearing loss affects almost half of the United States population over 65 years old. Aging and chronic exposure to loud noises can both contribute to hearing loss. Although there are steps to improve one's hearing, most types of hearing loss cannot be reversed. Symptoms of hearing loss can include muffling of speech and other sounds, difficulty understanding words (especially against background noise or in a crowd), and trouble hearing consonants. Difficulty hearing can occur gradually and affect daily life, and a patient's hearing loss may vary from left ear to right ear. Moreover, patients suffering from hearing loss may require further monitoring of one or more physiological parameters. In addition to hearing loss, a patient may desire to monitor one or more physiological parameters such as the patient's oxygen saturation level and/or body temperature, and have such physiological parameters transmitted to the patient, to medical professionals (providers), or to a medical database.
A patient desiring to address hearing loss may seek the care of a medical professional or hearing specialist. A medical professional or hearing specialist may suggest the patient use an auricular device such as a hearing aid, headphones, an ear-bud, and/or the like. A typical hearing aid may not include several features desired by the patient and healthcare providers alike. For example, a typical hearing aid may not provide a patient with the ability to distinguish between an audio source (e.g., a speaker's voice, audio, and/or the like) and background noise. Additionally, the patient may desire that the hearing aid also monitor one or more physiological parameters of the patient and automatically report such information to the patient or medical professionals. Consequently, a typical hearing aid might not satisfy the needs of both patients and healthcare providers alike.
Accordingly, it may be desirable to provide a patient with an auricular device that may distinguish an audible signal from a target acoustic source (e.g., audio source), measure one or more physiological parameters of a user, and provide an indication of the one or more physiological parameters to the user and medical professionals alike.
An auricular device 100 can be of various structural configurations and/or can include various structural features that can aid mechanical securement to any of such portions of the ear 2 and/or other portions of the user 1 (e.g., on or near portions of a head of the user 1). In some implementations, an auricular device 100 can be similar or identical to and/or incorporate any of the features described with respect to any of the devices described and/or illustrated in U.S. Pat. No. 10,536,763, filed May 3, 2017, titled “Headphone Ventilation,” and/or can be similar or identical to and/or incorporate any of the features described with respect to any of the devices described and/or illustrated in U.S. Pat. No. 10,165,345, filed Jan. 4, 2017, titled “Headphones with Combined Ear-Cup and Ear-Bud,” each of which are incorporated by reference herein in their entireties and form part of the present disclosure. In some implementations, auricular device 100 can be similar or identical to any of the devices described in U.S. Pat. No. 10,536,763 and/or U.S. Pat. No. 10,165,345 and also includes one or more of the features described with reference to
a. Example Aspects Related to Controller for an Auricular Device
A processor 102 can be configured, among other things, to process data, execute instructions to perform one or more functions, and/or control the operation of an auricular device 100. For example, a processor 102 can process physiological data obtained from an auricular device 100 and can execute instructions to perform functions related to storing and/or transmitting such physiological data. For example, a processor 102 can process data received from one or more sensors of an auricular device 100, such as any or all of oximetry sensor 112, accelerometer 114, gyroscope 116, temperature sensor(s) 118, and/or any other sensor(s) 120 of the auricular device 100. A processor 102 can execute instructions to perform functions related to storing and/or transmitting any or all of such received data.
In some implementations, an auricular device 100 can be configured to adjust a size and/or shape of a portion of the auricular device 100 to secure to an ear of a user. In some implementations, an auricular device 100 can include an ear canal portion configured to fit and/or secure within at least a portion of an ear canal of a user when the auricular device 100 is in use. In such implementations, an auricular device 100 can be configured to adjust a size and/or shape of such ear canal portion to secure within the user's ear canal. Such adjustment can be by inflating a portion of the ear canal portion, for example, or via an alternative mechanical means. In some implementations, an auricular device 100 includes an ear bud configured to fit and/or secure within the ear canal of a user, and in such implementations, the auricular device 100 can be configured to inflate the ear bud (or a portion thereof) to adjust a size and/or shape of the ear bud. In some implementations, an auricular device 100 includes an air intake and an air pump coupled to an inflatable portion of the auricular device 100 (e.g., of an ear bud) and configured to cause inflation in such manner.
A storage device 104 can include one or more memory devices that store data, including without limitation, dynamic and/or static random-access memory (RAM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like. Such stored data can be processed and/or unprocessed physiological data or other types of data (e.g., motion and/or location data) obtained from an auricular device 100, for example. In some implementations, the storage device 104 can store information indicative and/or related to one or more users. For example, in some implementations of an auricular device 100 that are configured to cause inflation of a portion of the auricular device 100 within a user's ear canal as discussed above, the storage device 104 can store information related to a user inflation profile that can be utilized by the auricular device 100 to cause adjustment of a size and/or shape of such inflatable portion within the user's ear to a certain amount. In some implementations, as discussed elsewhere herein, an auricular device 100 can be configured to store information regarding one or more hearing aid profiles of users, and such information can be stored in storage device 104.
A communication module 106 can facilitate communication (via wired and/or wireless connections) between an auricular device 100 (and/or components thereof) and external devices. For example, the communication module 106 can be configured to allow an auricular device 100 to wirelessly communicate with other devices, systems, and/or networks over any of a variety of communication protocols. A communication module 106 can be configured to use any of a variety of wireless communication protocols, such as Wi-Fi (802.11x), Bluetooth®, ZigBee®, Z-wave®, cellular telephony, infrared, near-field communications (NFC), RFID, satellite transmission, proprietary protocols, combinations of the same, and the like. A communication module 106 can allow data and/or instructions to be transmitted and/or received to and/or from an auricular device 100 and separate computing devices. A communication module 106 can be configured to transmit (e.g., wirelessly) processed and/or unprocessed physiological or other information to an external device (e.g., a separate computing device, a patient monitor, a mobile device (e.g., an iOS or Android enabled smartphone, tablet, laptop), a desktop computer, a server or other computing or processing device for display and/or further processing, and/or the like). Such separate computing devices can be configured to store and/or further process the received physiological and/or other information, to display information indicative of or derived from the received information, and/or to transmit information (including displays, alarms, alerts, and notifications) to various other types of computing devices and/or systems that may be associated with a hospital, a caregiver (e.g., a primary care provider), and/or a user (e.g., an employer, a school, friends, family) that have permission to access the user's data. As another example, the communication module 106 of an auricular device 100 can be configured to wirelessly transmit processed and/or unprocessed obtained physiological information and/or other information (e.g., motion and/or location data) to a mobile phone which can include one or more hardware processors configured to execute an application that generates a graphical user interface displaying information representative of the processed or unprocessed physiological and/or other information obtained from the auricular device 100. A communication module 106 can be and/or include a wireless transceiver.
In some implementations, an auricular device 100 includes an information element 108. An information element 108 can be a memory storage element that stores, in non-volatile memory, information used to help maintain a standard of quality associated with an auricular device 100. Illustratively, the information element 108 can store information regarding whether an auricular device 100 has been previously activated and/or whether the auricular device 100 has been previously operational for a prolonged period of time, such as, for example, one, two, three, four, five, six, seven or eight or more hours. Information stored in the information element 108 can be used to help detect improper re-use of an auricular device 100, for example.
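As a minimal sketch of how an information element 108 might track prior activation and cumulative operational time, assuming a simple JSON-backed non-volatile store (the disclosure does not specify a storage format, and the helper name and limit constant are illustrative):

```python
import json

USAGE_LIMIT_S = 8 * 3600  # e.g. the eight-hour figure mentioned above

def record_session(path, session_seconds):
    """Accumulate operational time in the information element's
    non-volatile store and flag possible improper re-use."""
    try:
        with open(path) as f:
            state = json.load(f)
    except FileNotFoundError:
        state = {"activated": False, "total_seconds": 0.0}
    state["activated"] = True
    state["total_seconds"] += session_seconds
    with open(path, "w") as f:
        json.dump(state, f)
    return state["total_seconds"] > USAGE_LIMIT_S
```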
With continued reference to
b. Example Aspects Related to Physiological Sensors for an Auricular Device
An auricular device 100 can include various sensors for determination of physiological parameters and/or for generating signals responsive to physiological characteristics of a user. For example, as shown in
An oximetry sensor 112 (which may also be referred to as an "optical sensor") can include one or more emitters and one or more detectors for obtaining physiological information indicative of one or more blood parameters of a user. These parameters can include various blood analytes such as oxygen, carbon monoxide, methemoglobin, total hemoglobin, glucose, proteins, lipids, a percentage thereof (e.g., concentration or saturation), and the like. An oximetry sensor 112 can also be used to obtain a photoplethysmograph, a measure of plethysmograph variability, pulse rate, a measure of blood perfusion, and the like. Information such as oxygen saturation (SpO2), pulse rate, a plethysmograph waveform, respiratory effort index (REI), acoustic respiration rate (RRa), EEG, ECG, pulse arrival time (PAT), perfusion index (PI), pleth variability index (PVI), methemoglobin (MetHb), carboxyhemoglobin (CoHb), total hemoglobin (tHb), and glucose can be obtained from oximetry sensor 112, and data related to such information can be transmitted by an auricular device 100 (e.g., via communication module 106) to an external device (e.g., a separate computing device, a patient monitor, and/or mobile phone). An auricular device 100 can be configured to operably position the oximetry sensor 112 (e.g., emitter(s) and/or detector(s) thereof) proximate and/or in contact with various portions of an ear of a user when the auricular device 100 is secured to the ear, including but not limited to, a pinna, a concha, an ear canal, a tragus, an antitragus, a helix, an antihelix, and/or another portion of the ear.
An auricular device 100 can include one or more temperature sensors 118. For example, an auricular device 100 can include one or more (such as one, two, three, four, five, six, seven, or eight or more) temperature sensors 118 that are configured to determine temperature values of the user and/or that are configured to generate and/or transmit signal(s) based on detected thermal energy of the user to processor 102 for determination of temperature value(s). An auricular device 100 can be configured to operably position the temperature sensors 118 proximate and/or in contact with various portions of an ear of a user when the auricular device 100 is secured to the ear, including but not limited to, a pinna, a concha, an ear canal, a tragus, an antitragus, a helix, an antihelix, and/or another portion of the ear. As an alternative or as an addition to such temperature sensor(s) 118 configured to determine body temperature values and/or to generate signal(s) responsive to thermal energy to processor 102 for temperature determination, an auricular device 100 can include one or more additional temperature sensors for measuring ambient temperature. For example, an auricular device 100 can include one or more temperature sensors 118 for determining temperature values of the user and one or more temperature sensors 118 for determining ambient temperature. In some implementations, an auricular device 100 (e.g., via processor 102) can determine modified, adjusted temperature value(s) of the user based on comparisons of data received from both types of temperature sensors. In some implementations, an auricular device 100 includes one or more temperature sensors configured to be positioned proximate and/or in contact with portions of the user's ear when the auricular device 100 is secured thereto (which may be referred to as "skin" temperature sensors) and also one or more temperature sensors configured to be positioned away from and/or to face away from skin of the user when the device 100 is secured to the ear for determining ambient temperature (which may be referred to as "ambient" temperature sensors).
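One plausible form of such an adjustment, sketched below, corrects the skin-contact reading toward core temperature using the ambient reading. The linear model and the calibration constant k are assumptions of the sketch; the disclosure only states that the adjustment is based on comparisons of the two sensor types.

```python
def adjusted_body_temperature(skin_c, ambient_c, k=0.1):
    """Estimate an adjusted body temperature (deg C) by pushing the
    skin reading away from the ambient reading; k is a hypothetical
    calibration constant, not a value from the disclosure."""
    return skin_c + k * (skin_c - ambient_c)

# e.g. adjusted_body_temperature(35.8, 22.0) -> ~37.2 deg C
```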
As another example, in some implementations, an auricular device 100 includes one or more of such ambient temperature sensors which are operably positioned at or near a side or surface of the auricular device 100 that faces away from the user, for example, away from skin and/or ear of the user, and/or away from any portion of the ear such as those discussed herein. As discussed below, a portion of an auricular device 100 can be configured to be positioned and/or secured within an ear canal of the user when the auricular device 100 is in use, and in such implementations, the auricular device 100 can include one or more temperature sensors on such portion.
c. Example Aspects Related to Motion Sensors for an Auricular Device
With reference to
An auricular device 100 can include at least one inertial measurement unit (herein "IMU") for measuring motion, orientation, and/or location of a user (e.g., one or more of a combination of accelerometer 114 and/or gyroscope 116). An IMU can be configured to determine motion, orientation, position, and/or location of a user. A processor 102 may be configured to receive motion, orientation, position, and/or location data of a user from at least one IMU. Additionally, a processor 102 may determine motion, orientation, position, and/or location of a user based on data received from at least one IMU. For example, an auricular device 100 can include an IMU that can measure static and/or dynamic acceleration forces and/or angular velocity. By measuring static and/or dynamic acceleration forces and/or angular velocity, an IMU can be used to calculate movement and/or relative position of auricular device 100. An IMU can include one or more, and/or a combination of, for example, an AC-response accelerometer (e.g., a charge mode piezoelectric accelerometer and/or a voltage mode piezoelectric accelerometer), a DC-response accelerometer (e.g., a capacitive accelerometer, a piezoresistive accelerometer), a microelectromechanical system (MEMS) gyroscope, a hemispherical resonator gyroscope (HRG), a vibrating structure gyroscope (VSG), a dynamically tuned gyroscope (DTG), a fiber optic gyroscope (FOG), a ring laser gyroscope (RLG), and the like. An IMU can measure acceleration forces and/or angular velocity in one, two, or three dimensions. With calculated position and movement data, users 1 of auricular device 100 and/or others (e.g., care providers) may be able to map the positions or movement vectors of an auricular device 100. Any number of IMUs can be used to collect sufficient data to determine position and/or movement of an auricular device 100. An auricular device 100 can be configured to determine and/or keep track of steps and/or distance traveled by a user based on data from at least one IMU (e.g., one or more of a combination of accelerometer 114 and/or gyroscope 116).
Incorporating at least one IMU (e.g., one or more of a combination of accelerometer 114 and/or gyroscope 116) in an auricular device 100 can provide a number of benefits. For example, an auricular device 100 can be configured such that, when motion is detected (e.g., by a processor 102) above a threshold value, the auricular device 100 stops determining and/or transmitting physiological parameters. As another example, an auricular device 100 can be configured such that, when motion is detected above and/or below a threshold value, the oximetry sensor 112 and/or temperature sensors 118 are not in operation and/or physiological parameters based on the oximetry sensor 112 and/or temperature sensors 118 are not determined, for example, until motion of the user falls below such threshold value. This can advantageously reduce or prevent noisy, inaccurate, and/or misrepresentative physiological data from being processed, transmitted, and/or relied upon. Additionally, an auricular device 100 can be configured such that, when motion is detected (e.g., via processor 102) above a threshold value, the auricular device 100 begins determining and/or transmitting physiological parameters.
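A minimal sketch of this motion gating, assuming a hypothetical threshold value and that gravity has already been removed from the accelerometer signal upstream:

```python
import numpy as np

MOTION_THRESHOLD_G = 0.2  # hypothetical threshold; not specified above

def gate_reading(accel_xyz_g, reading):
    """Drop a physiological reading while motion exceeds the threshold;
    readings resume once motion falls back below it."""
    if np.linalg.norm(accel_xyz_g) > MOTION_THRESHOLD_G:
        return None  # suppress potentially motion-corrupted data
    return reading
```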
Some implementations of auricular device 100 can interact and/or be utilized with any of the physiological sensors and/or systems of
d. Example Aspects Related to Other Sensors for an Auricular Device
With continued reference to
e. Example Aspects Related to Audio Components for an Auricular Device
An auricular device 100 can include various software and/or hardware components to allow the auricular device 100 to improve hearing of a user and/or function as a hearing aid. For example, as shown in
Speakers 124 can be configured to output sound into and/or toward the user's ear. Speakers 124 can be operably positioned by an auricular device 100 in a variety of locations, for example, on a portion or portions of the auricular device 100 that face toward the user when the auricular device 100 is in use. For example, speakers 124 can be operably positioned by an auricular device 100 to direct output sound within and/or toward the ear canal of the user. In some implementations, speakers 124 can be positioned on and/or along an ear canal portion of an auricular device 100 that is positioned within the user's ear canal when the auricular device 100 is in use.
f. Example Aspects of Operating Mode(s) for an Auricular Device
In some implementations, an auricular device 100 can be configured (e.g., via processor 102) to modify one or more characteristics of ambient sound detected by the one or more microphones 122. For example, an auricular device 100 can be configured to modify one or more frequencies of ambient sound detected by the microphone(s) 122. For example, an auricular device 100 can be configured to modify one or more frequencies associated with sound detected by the microphones 122 and can communicate such modified frequencies to speakers 124 for outputting to the user. This can be significantly advantageous for many persons suffering from hearing impairments who are unable to hear certain frequencies and/or frequency ranges of sound. In some implementations, a processor 102 can include a frequency adjustment module configured to carry out a frequency modification. As discussed elsewhere herein, an auricular device 100 can be configured to communicate (for example wirelessly communicate) with external devices. In some implementations, an auricular device 100 (e.g., via processor 102) can be configured to determine and output text to such external devices based on a sound detected by the microphones 122. In some examples, an auricular device 100 (e.g., via processor 102) is configured to modify one or more characteristics of ambient sound detected by microphones 122 based upon a hearing profile of a user. An auricular device 100 can be configured to store one or more hearing profiles (e.g., each hearing profile associated with a particular user) in storage device 104 of an auricular device 100. Alternatively or additionally, an auricular device 100 can be configured to receive (e.g., wirelessly receive) one or more hearing profiles from an external device. For example, one or more hardware processors of such external device can execute an application (e.g., software application, web or mobile application, etc.) that can execute commands to enable the separate computing device to transmit a hearing profile to an auricular device 100 for use by the auricular device 100 and/or to instruct the auricular device 100 to employ the hearing profile to carry out modification of one or more characteristics of detected sound for the user (e.g., frequency modification).
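As an illustrative sketch of frequency modification driven by a hearing profile, the following applies per-band gains to one frame of detected sound in the frequency domain. The band layout, gain values, and function name are assumptions, since the disclosure does not prescribe a particular filter design.

```python
import numpy as np

def apply_hearing_profile(frame, fs, profile):
    """Apply per-band gains (in dB) from a hearing profile to one frame
    of detected sound; 'profile' maps (low_hz, high_hz) -> gain_db."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    for (lo, hi), gain_db in profile.items():
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=len(frame))

# e.g. boost a high band a user struggles to hear:
# out = apply_hearing_profile(frame, 16000, {(2000.0, 8000.0): 12.0})
```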
In some cases, an auricular device 100 is configured to amplify sound that is detected by the microphones 122 prior to outputting by speakers 124. For example, in some implementations, an auricular device 100 can include an amplifier configured to amplify (e.g., increase a strength of) sound detected by the microphones 122 and/or amplify one or more signals generated by the microphones 122 based upon detected sound. In some implementations, a processor 102 can be configured to convert sound detected by the microphones 122 into digital signals, for example, before processing and/or before transmission to speakers 124.
In some implementations, an auricular device 100 can be configured to receive audio data (e.g., electronic data representing sound) from an external device and emit audio (e.g., via speakers 124) based on the received audio data. In such configurations, an auricular device 100 can function as an audio playback device. An auricular device 100 can include various software and/or hardware components to allow the auricular device 100 to carry out such audio functions. In some cases, an auricular device 100 is configured to provide noise cancellation to block out ambient sounds when the auricular device 100 is facilitating audio playback.
An auricular device 100 can be configured to operate in various modes. For example, an auricular device 100 can be configured to operate in at least one of a music or audio playback mode, a hearing aid mode, a noise cancelling mode, and/or a mute mode. While operating in a music or audio playback mode, processor 102 may facilitate emission of sound to the user's ear via speakers 124 based on received audio data from an external device. During a hearing aid mode, processor 102 can modify one or more characteristics of ambient sound detected by the microphones 122 as described above (e.g., by a beam pattern or any other method such as by an audiogram). An auricular device 100 can be further configured to operate in a noise cancelling mode, wherein processor 102 is configured to determine and/or cancel ambient noise in an environment. An auricular device 100 may be configured to operate in a mute mode, wherein the microphone and/or speaker may be disabled during operation. In some cases, an auricular device 100 can be configured to operate in only one of such modes and/or be configured to switch between these modes. As discussed elsewhere herein, an auricular device 100 can be configured to communicate (for example wirelessly communicate) with an external device. In some implementations, an auricular device 100 can be configured for communication with an external device that is configured to execute an application (e.g., software application, web or mobile application, etc.) that can execute commands to enable the separate computing device to instruct the auricular device 100 to employ one of a plurality of modes of the auricular device 100 (e.g., the audio playback mode, hearing aid mode, noise cancelling mode, and/or mute mode).
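A minimal sketch of such mode dispatch, with hypothetical modify and estimate_noise helpers standing in for the frequency modification and noise-determination functions described above:

```python
from enum import Enum, auto

class Mode(Enum):
    AUDIO_PLAYBACK = auto()
    HEARING_AID = auto()
    NOISE_CANCELLING = auto()
    MUTE = auto()

def route_frame(mode, mic_frame, stream_frame, dsp):
    """Choose what the speakers 124 emit for one audio frame."""
    if mode is Mode.AUDIO_PLAYBACK:
        return stream_frame               # audio data from an external device
    if mode is Mode.HEARING_AID:
        return dsp.modify(mic_frame)      # hypothetical frequency shaping
    if mode is Mode.NOISE_CANCELLING:
        return -dsp.estimate_noise(mic_frame)  # hypothetical anti-noise
    return None                           # MUTE: microphone/speaker disabled
```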
An auricular device 100 can be configured to operate in a differential audio playback mode. Differential audio playback can be used, for example, during noise cancellation, in spatial audio applications, to enhance overall audio quality, and/or to cancel common-mode signals. Differential audio playback can be implemented by an auricular device 100 and/or a case 200. In some examples, a microphone array including microphones 122 can generate audio data responsive to detecting audio and can transmit the audio data to a processor 102. Processor 102 can process audio data originating from microphones 122. Processor 102 can separate portions of audio data corresponding to audio sources. For example, processor 102 can separate a portion of an audio signal corresponding to a person talking from other portions of an audio signal corresponding to ambient noise. In some implementations, the processor 102 can separate portions of audio data corresponding to various people's voices, such as by implementing voice recognition. For example, the processor 102 can separate a first portion of audio data corresponding to a first person's voice and a second portion of audio data corresponding to a second person's voice. The processor 102 may implement machine learning (e.g., a neural network) to process audio data from the microphones 122 and/or to separate portions of the audio data based on an audio source. In some implementations, the processor 102 can separate a user's own voice from an audio source. In some implementations, the processor 102 can separate portions of the audio based on whether the associated audio source is near (e.g., a near-field audio source) or far (e.g., a far-field audio source). The processor 102 can apply different signal processing to the various portions of audio. For example, the processor 102 can suppress, subtract, or cancel a portion of audio, such as a portion corresponding to ambient noise or a portion corresponding to a person talking who is not of interest to the user (e.g., a stranger). As another example, the processor 102 can amplify a portion of audio, such as a portion corresponding to a far-field audio source which may be quiet or corresponding to a person talking who is of interest to the user (e.g., a relative of the user, an orator, etc.). A processor 102 can synchronize and/or align the original audio and the determined differential signal. Additionally, a processor 102 can combine synchronized audio data and/or the differential audio data before transmitting audio data to one or more speakers 124. In some examples, the differential audio data and the audio data are combined to optimize overall audio quality by reducing the impact of external noise and/or enhancing specific audio features. In some examples, a processor 102 can recognize one or more voices from an audio source (e.g., voice recognition) and enhance audio associated with the audio source via the differential audio playback mode. In some examples, a processor 102 can identify and cancel noise based on received audio in the differential audio playback mode.
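One way the recombination step of differential audio playback could look, assuming source separation has already produced per-source portions (via voice recognition or a neural network, as described above); the gain and alignment parameters are illustrative:

```python
import numpy as np

def differential_playback(portions, gains_db, offsets):
    """Recombine separated audio portions (e.g. target voice, own voice,
    ambient noise) after per-source gain and sample alignment."""
    n = min(len(p) for p in portions)
    out = np.zeros(n)
    for portion, gain_db, offset in zip(portions, gains_db, offsets):
        gain = 10.0 ** (gain_db / 20.0)
        out += gain * np.roll(portion[:n], offset)  # align, then weight
    return out

# e.g. amplify a distant talker (+6 dB) while suppressing noise (-20 dB):
# mixed = differential_playback([talker, noise], [6.0, -20.0], [0, 0])
```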
In some examples, processor 102 can determine an approximate distance between a user of an auricular device 100 and an audio source (e.g., determine whether audio source is near-field or far-field). For example, processor 102 can determine a distance from audio data originating from microphones 122 responsive to audio based on arrival time of the audio detected at the microphones 122 (e.g., a difference in arrival time at various microphones within an array), and/or a difference in volume of the audio detected at the microphones 122.
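A sketch of such distance cues, assuming a two-microphone array: the time-difference-of-arrival comes from a cross-correlation peak, and a level-imbalance heuristic (with a hypothetical tuning ratio) separates near-field from far-field sources.

```python
import numpy as np

def tdoa_seconds(mic_a, mic_b, fs):
    """Time-difference-of-arrival between two microphones of the array,
    taken from the peak of their cross-correlation."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = int(np.argmax(corr)) - (len(mic_b) - 1)
    return lag / fs

def classify_source(mic_a, mic_b, near_level_ratio=1.5):
    """Crude near/far decision from the volume difference: a nearby
    source hits the closer microphone noticeably louder, while a far
    source arrives at nearly equal level at both microphones."""
    ratio = np.std(mic_a) / (np.std(mic_b) + 1e-12)
    imbalance = max(ratio, 1.0 / ratio)
    return "near-field" if imbalance > near_level_ratio else "far-field"
```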
In some implementations, an auricular device 100 can be similar or identical to and/or incorporate any of the features and/or sensors described with respect to any of the devices described and/or illustrated in U.S. Pub. No. 2022/0070604, published Mar. 3, 2022, titled "Audio Equalization Metadata," which is incorporated by reference herein in its entirety and forms part of the present disclosure.
g. Example Aspects Relating to a Beamforming Filter Unit for an Auricular Device
An auricular device 100 can include one or more beamformer filter units 128. A beamformer filter unit 128 can be, for example, software, hardware, and/or a combination thereof. A beamformer filter unit 128 can filter noise while maximizing the sensitivity of an audible signal from a target acoustic source, or spatially process a target acoustic source among a multitude of acoustic sources in a user's environment. A beamformer filter unit 128 can have an input electrically connected to an output of at least one transducer of an auricular device 100. Additionally, the beamformer filter units 128 can have an output electrically connected to an input of a processor 102 of an auricular device 100. In some implementations, the auricular device may be configured with a beamformer filter unit 128, although it will be understood that beamforming may be accomplished by one, two, or more beamformer filter units 128 and/or processor 102.
In an example implementation, a beamformer filter unit 128 may be configured to transmit a beamformed signal to a processor 102 based on at least a first input signal from a first transducer, and/or a second input signal from a second transducer. The beamformer filter units 128 may be configured to transmit a beamformed signal to a processor 102 based on a plurality of input signals from a plurality of transducers (an array of transducers). A transducer can be, for example, microphones 122. Microphones 122 can be positioned to form a microphone 122 array, wherein the beamformer filter unit 128 receives a signal from the microphone 122 array, generates a beamformed signal, and transmits the beamformed signal to a processor 102. Moreover, a processor 102 may transmit the beamformed signal to speakers 124.
A target acoustic source of a beamformed signal can originate from any direction with respect to the user, for example, towards a user's mouth, towards a communication partner in front of a user, or behind a user. Determining a directionality for a beamformer filter can be accomplished by any of several means, for example, estimating a phase difference for the signals from each of a plurality of microphones 122. In some examples, a beamformer filter unit 128 can be configured to determine directionality of a signal based on an adaptive beamforming configuration as described below. Additionally, a beamformer filter unit 128 may process an audible signal in the time domain or in the frequency domain, or partially in the time domain and/or partially in the frequency domain. It should be appreciated that one skilled in the art can identify any of a number of means of determining the target acoustic source.
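As a concrete but non-limiting example of beamforming from estimated time or phase differences, a minimal delay-and-sum beamformer is sketched below; the frequency-domain fractional delay and the array-geometry inputs are assumptions of the sketch, not features required by the disclosure.

```python
import numpy as np

def delay_and_sum(frames, fs, mic_positions_m, direction_unit, c=343.0):
    """Minimal delay-and-sum beamformer. frames: (n_mics, n_samples);
    each microphone signal is delayed so a plane wave arriving from
    'direction_unit' adds coherently, then the channels are averaged.
    Fractional delays are applied in the frequency domain."""
    n_mics, n = frames.shape
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    out = np.zeros(n)
    for m in range(n_mics):
        # Microphones closer to the source hear the wavefront earlier;
        # delay each one by its lead time to align all channels.
        tau = np.dot(mic_positions_m[m], direction_unit) / c
        spectrum = np.fft.rfft(frames[m]) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n=n)
    return out / n_mics
```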
In an example implementation, the input transducers can be at least microphones 122. Advantageously, the input transducers can be an array of microphones 122 directionally adapted to enhance the target acoustic source among a multitude of acoustic sources in the local environment of the user wearing an auricular device 100. Additionally, and/or alternatively, the array of microphones 122 can be located at one or more of the following locations: affixed to a housing of an auricular device 100, elsewhere on the body of a user, and/or at any other predetermined location. In another implementation, a processor 102 can receive an input from the array of microphones 122 and implement functions of a beamformer filter unit 128 such that the processor 102 enhances the sensitivity of an audible signal from a target acoustic source.
In some examples, a beamformer filter unit 128 can be used by a processor 102 in conjunction with at least one or more physiological sensors of an auricular device 100 to determine and suppress and/or emphasize a target acoustic source. In an example implementation, a processor 102 obtains motion data of a user from motion sensors (e.g., accelerometers 114 and/or gyroscopes 116) of an auricular device 100 and estimates the user's orientation to determine a 3D position of sound. A beamformer filter unit 128 can be configured to emphasize audio from the determined 3D position, and suppress audio from all other positions, such that the user hears sound from the determined 3D position, and noise from other positions may be reduced. Additionally, a processor 102 may transmit motion data to a beamformer filter unit 128 such that audio from a target acoustic source and audio from ambient noise are modified accordingly. Advantageously, an auricular device 100 can change the target acoustic source with respect to the user's orientation such that, for example, a target acoustic source remains constant as the user changes orientation.
Advantageously, an auricular device 100 can be wirelessly connected to an external device 300 as described in
Example beam patterns are illustrated as part of
In an example implementation, a processor 102 can receive a predetermined beam pattern, wherein the beam pattern is a dynamic beam pattern which may cause the beamformer filter unit 128 beam to vary based on at least one input from the one or more sensors of an auricular device 100. For example, an IMU may cause the beam pattern to vary based on the detected movement of the user wearing an auricular device 100.
In one example of a dynamic beam pattern, input from the IMU may cause the beam to appear "fixed" (e.g., static) as a user rotates. Advantageously, the target acoustic source may originate from the same location while the user changes orientation. In another implementation, the beam pattern of the beamformer filter unit 128 may be reduced depending on the user's orientation. For example, the beam pattern may be reduced if the IMU detects that the user leans forward.
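A minimal sketch of this IMU compensation, assuming the beam is steered by a single azimuth angle and the IMU supplies a heading estimate (both in radians; the function name and frame conventions are illustrative):

```python
import math

def device_frame_steering(target_azimuth_world, imu_heading):
    """Keep the beam 'fixed' on a world-frame azimuth while the head
    rotates: subtract the IMU heading, then wrap to (-pi, pi] for the
    beamformer's steering input."""
    angle = target_azimuth_world - imu_heading
    return math.atan2(math.sin(angle), math.cos(angle))
```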
In an additional implementation, the storage device 104 may receive, from a processor 102 or an external device, a predefined beam pattern. The one or more predefined beam patterns may be retrieved and implemented by the beamformer filter unit 128. Additionally, the beamformer filter unit 128 may transmit the current beam pattern to the storage device 104. A beamformer filter unit 128 may receive a request from at least a processor 102 or an external device to transmit the current beam pattern to the storage device 104.
h. Example Aspects Related to an Illuminating Indicator for an Auricular Device
With continued reference to
An indicator 126 can provide, for example, users with vital information regarding at least a status of an auricular device 100 and/or the present condition of the user based on the one or more physiological parameters determined by the auricular device 100. For example, the auricular device can include a processor 102 configured to determine the status of an auricular device 100 and/or a condition of a user and provide a visual representation of the status of the auricular device 100 to other users by changing at least one of the output characteristics of the indicator 126.
In an example implementation, a processor 102 may determine a state of an auricular device 100 including: whether the power source 110 of the auricular device 100 has decreased below a threshold value, whether one or more of the sensors (e.g., the oximetry sensor 112, temperature sensor 118, other sensors 120, or any other sensor disclosed herein) has failed, whether a physiological parameter of the user has met or exceeded a threshold, whether the user may be sourcing hearing data from an external device or an ambient source, and/or whether to communicate information with an external device (such as a healthcare monitoring device and/or smartphone). Additionally, a processor 102 may associate one or more of the following illumination characteristics of indicator 126 with at least one status: changing the indicator 126 from one of a plurality of colors to a different color, changing the strobe (or frequency) of indicator 126, changing the pulse duration (e.g., a duty cycle) of indicator 126, and/or changing the intensity of the indicator 126. A processor 102 can combine one or more of the illumination characteristics to represent one or more statuses, for example, an indicator 126 having a distinct color, a determined pulse, a duty cycle, and/or intensity. A pulsed output can be caused by a processor 102, resulting in the indicator 126 illuminating at a determined frequency, such as 0.1 Hz, 0.5 Hz, 1 Hz, 2 Hz, or any other frequency. A duty cycle can be used in combination with a pulsed output; for example, the duty cycle of the indicator can be at or about 25% for a 1 Hz signal, such that the indicator 126 is illuminated for approximately 0.25 seconds followed by approximately 0.75 seconds during which the indicator 126 is not illuminated. A processor 102 may cause indicator 126 to illuminate at a given intensity. In an example implementation, a processor 102 may cause the indicator 126 to illuminate at approximately half the intensity rating of the indicator 126. In another example implementation, a processor 102 may cause the indicator 126 to illuminate at the full intensity rating of the indicator 126. The intensity rating of an indicator 126 can be determined based on, for example, the maximum luminous output of the indicator 126 (e.g., in lumens or candelas). Additionally and/or alternatively, the intensity of the indicator 126 can be based on a measured electrical characteristic (e.g., a voltage and/or a current) supplied to the indicator 126 to cause the indicator to illuminate.
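A short sketch of the pulsed drive described above, assuming the processor evaluates the indicator state against a monotonic clock (set_led is a hypothetical driver call): at 1 Hz and a 25% duty cycle this yields approximately 0.25 seconds on and 0.75 seconds off per period.

```python
def indicator_is_on(t_s, freq_hz=1.0, duty=0.25):
    """Pulsed indicator drive: at 1 Hz and a 25% duty cycle the LED is
    on for ~0.25 s and off for ~0.75 s of every period."""
    phase = (t_s * freq_hz) % 1.0
    return phase < duty

# e.g. in a drive loop: set_led(on=indicator_is_on(time.monotonic()))
```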
In an example implementation, a processor 102 can be configured to cause the indicator 126 to emit a first colored output (e.g., green) when the processor 102 determines an auricular device 100 does not have a fault condition (e.g., normal operation). A processor 102 can be further configured to cause an indicator 126 to emit a second colored output (e.g., red) when the processor 102 determines that an auricular device 100 has a fault condition (e.g., low power source voltage, or that one or more sensor malfunctions have occurred). In another example implementation, an auricular device 100 can have more than one indicator 126 (e.g., two, three, four, five, and/or more). A processor 102 may associate a status with at least one indicator 126 output. Alternatively, a processor 102 may cause an indicator 126 to change output characteristics based on a determined status of an auricular device 100. Additionally, and/or alternatively, a processor 102 can cause an indicator 126 to emit a coordinated output such that, for example, a fault condition can be expressed by more than one indicator 126.
Additionally, a processor 102 may be configured to cause an indicator 126 to emit light based on one or more operating modes of an auricular device 100. An emitted light for one or more operating modes can include at least one and/or a combination of illumination characteristics (e.g., a distinct color, pulse, duty cycle, and/or intensity). An operating mode of an auricular device 100 can be any of the operating modes described herein (e.g., music or audio playback mode and/or hearing aid mode) and/or any additional operating modes as configured by, for example, a processor 102 of the auricular device 100. A processor 102 may be configured to cause an indicator 126 to emit light based on a change from one or more first operating modes to one or more second operating modes.
Advantageously, an auricular device 100 having an indicator 126, wherein the auricular device 100 can be configured to emit light based on one or more operating modes, can notify those in the presence of the user of the current operating mode of the auricular device 100. Someone in view of an indicator 126 may correlate one or more illumination characteristics with one or more operating modes of the auricular device, such that a person in the presence of the user may be able to determine the operating mode of the auricular device 100 without having to interrupt the user. For example, those in the presence of the user may see the indicator 126 and determine that the user may be operating an auricular device 100 in a hearing aid mode, and thus that the user can hear a person speaking in an ambient environment. Having an auricular device 100 configured with an indicator to emit light based on one or more operating modes can be used to notify those in the presence of the user that the user may be sourcing audio data from an alternative source (music or audio playback mode), such as, for example, listening to music sourced from the storage device 104 and/or sourced from an external device. Hence, those in the presence of the user may see the indicator 126 and determine that the user may not be able to hear speech in an ambient environment.
In an example implementation, a processor 102 may be configured to cause the indicator 126 to emit light while the auricular device is operated in a hearing aid mode such that persons near the user can see the emitted light. The emitted light can have one or more of the illumination characteristics as discussed herein. In one implementation, the illumination characteristics for operating in a hearing aid mode can be, for example, a green light at about a 1 Hz pulse, having approximately a 50% duty cycle. As discussed herein, a processor 102 operating in a hearing aid mode may cause speakers 124 to emit sound based on audio data received from microphones 122 such that a user hears speech and audio from the user's ambient environment (e.g., from a presenter during a meeting, or from someone having a conversation with the user).
In another example implementation, a processor 102 may be configured to cause the indicator 126 to emit light while the auricular device is operated in a music or audio playback mode such that persons near the user can see the emitted light. The emitted light can have one or more of the illumination characteristics disclosed herein. In one implementation, the illumination characteristics for operating in an audio playback mode can be, for example, a blue light, at about a 2 Hz pulse, at or about a 75% duty cycle. A processor 102, operating in the music or audio playback mode may cause the speakers 124 to emit sound based on audio data sourced from, for example, storage device 104 and/or sourced from an external device. In some examples, a processor 102 can cause an indicator 126 to emit one or more illumination characteristics when the user of an auricular device 100 does not wish to be disturbed.
A processor 102 can be configured to cause indicator 126 to illuminate according to one or more illumination characteristics (e.g., a distinct color, frequency, duty cycle, and/or intensity as disclosed above) to indicate that an auricular device 100 is wirelessly connected to a communication channel. For example, indicator 126 can be illuminated when an auricular device 100 is connected to a common audio source (e.g., multiple auricular devices are all receiving audio from a common communication channel, and thus all users hear the same sound). In some examples, a processor 102 can cause indicator 126 to illuminate according to one or more illumination characteristics disclosed above when an auricular device 100 is not connected to a common source (e.g., such that an auricular device 100 can visually indicate that the user is not connected to a common communication channel). In one implementation, a processor 102 can cause indicator 126 to emit a green light when an auricular device 100 is connected to a common audio channel, and/or cause indicator 126 to emit a red light when the auricular device is not connected to a common audio channel. In some examples, a user can quickly identify one or more additional users that are connected to a common audio channel and thus determine which users are receiving the same audio input from auricular device 100. A processor 102 may be configured to cause a plurality of indicators 126 to emit one or more colors and/or one or more illumination characteristics after determining whether an auricular device 100 is connected to a common communication channel. In some examples, a processor 102 can cause indicator 126 to illuminate according to a first illumination characteristic when an auricular device 100 is connected to a first communication channel, and cause indicator 126 to illuminate according to a second illumination characteristic when the auricular device 100 is connected to a second communication channel (e.g., to visually indicate which communication channel an auricular device 100 is connected to).
In an additional example implementation, the auricular device can be configured with more than one indicator 126, wherein a processor 102 is further configured to activate the indicators 126 with, for example, a high-intensity pulsing red output if the processor 102 determines that the patient may be suffering from a health condition (e.g., the blood oxygen saturation level of the patient may be below a threshold value as measured by the oximetry sensor 112, the body temperature of the patient may be below a threshold value as measured by the temperature sensor 118, or the patient may have suffered a fall as determined by a received audible signal from the microphones 122 and/or the IMU).
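For illustration, the indicator behaviors described above can be summarized as a state-to-illumination mapping. The following is a hypothetical encoding in Python; the state names, the 4 Hz alert rate, and the intensity values are assumptions (the green/1 Hz/50%, blue/2 Hz/75%, and green/red channel colors come from the examples above).

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Illumination:
        color: str
        pulse_hz: float
        duty_cycle: float   # fraction of each pulse period the LED is on
        intensity: float    # relative brightness, 0.0-1.0 (assumed scale)

    ILLUMINATION_BY_STATE = {
        "hearing_aid":          Illumination("green", 1.0, 0.50, 0.5),
        "audio_playback":       Illumination("blue",  2.0, 0.75, 0.5),
        "channel_connected":    Illumination("green", 1.0, 1.00, 0.5),
        "channel_disconnected": Illumination("red",   1.0, 1.00, 0.5),
        "health_alert":         Illumination("red",   4.0, 0.50, 1.0),
    }

    def illumination_for(state: str) -> Illumination:
        """Look up the illumination characteristics for a device state."""
        return ILLUMINATION_BY_STATE[state]

A table-driven mapping like this keeps the color/pulse/duty-cycle choices in one place, which is one plausible way a processor 102 could select among the distinct illumination characteristics described above.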
As another example implementation, a case 200 can be configured with, among other components, processor 202, communication module 206, microphone(s) 222, and/or a beamformer filter unit 228, to receive audio, determine a directionality of the audio, apply spatial processing, and/or transmit the audio to an auricular device 100. In some examples, case 200 can be configured to perform one or more tasks associated with a triangulated beamformer as described below. As another example, case 200 can be configured to emit audio via one or more speaker(s) 224 in response to a received audio data from, for example, an auricular device 100 and/or an external device. In a further example, case 200 can automatically pair with an auricular device 100 once the auricular device is separated from the case 200.
In addition to the example functionality as described above, case 200 can, via power source 210, provide an auricular device 100 with a charging capability. In some examples, case 200 can be a portable charger for an auricular device 100. For example, a case 200 may automatically charge an auricular device 100 after the auricular device 100 is placed inside and/or near the case 200. In some example implementations, case 200 may or may not be configured to carry auricular device 100. In some examples, case 200 can be another device such as a podium, a desktop microphone, and/or another type of external device not generally worn by a user. Additional implementations and/or configurations of an auricular device 100, a case 200, and/or external devices are described below.
Although the present disclosure may describe implementations and/or use of an auricular device 100 within the context of an ear of the user, it is to be understood that two of such auricular devices 100 can be secured to two ears of the user (one per ear) and can each be utilized to carry out any of the functions and/or operations described herein with respect to auricular device 100.
Any auricular device 100 described herein and/or components and/or features of the auricular devices described herein can be integrated into a wearable device that secures to another portion of a user's body. For example, any of the components and/or features of the auricular devices described herein can be integrated into a wearable device that can be secured to a head, chest, neck, leg, ankle, wrist, or another portion of the body. As another example, any of the components and/or features of the auricular devices described herein can be integrated into glasses and/or sunglasses that a user can wear. As another example, any of the components and/or features of the auricular devices described herein can be integrated into a device (e.g., a band) that a user can wear around their neck.
In some implementations, an auricular device 100 can be utilized to monitor characteristic(s) and/or quality of sleep of a user. As discussed elsewhere herein, an auricular device 100 can be configured to communicate (for example, wirelessly communicate) with external devices (e.g., a case 200, an external device 300, and/or a watch 302).
A network 101 can include any one or more communications networks, such as the Internet. A network 101 may be any combination of a local area network and/or a wireless area network, or the like. Accordingly, various components of the computing environment described herein can communicate with one another directly and/or via a network 101.
In some implementations, an auricular device 100 can be similar or identical to and/or incorporate any of the features and/or sensors described with respect to any of the devices described and/or illustrated in U.S. Pub. No. 2021/0383011, published Dec. 9, 2021, titled “Headphones with Timing Capability and Enhanced Security,” which is incorporated by reference herein in its entirety and forms part of the present disclosure.
In some implementations, a case 200 configured with a beamformer filter unit 228 as described herein can perform, execute, and/or be configured as described above with reference to an auricular device 100. For example, a case 200 configured with a beamformer filter unit 228 as described above can have one or more beam patterns and/or receive and modify audio data according to a beam pattern, as described herein. In some examples, a processor 102 can determine a beam pattern based on a detected sound (e.g., a sound serving as an indicator). In some examples, a processor 102 can, in response to a detected sound, apply one or more beam patterns as disclosed herein.
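For illustration only, the following is a minimal delay-and-sum beamformer sketch in Python. The disclosure does not specify the internals of beamformer filter unit 228, so the array geometry, steering model, and function names here are assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air

    def delay_and_sum(signals, mic_positions_m, steering_angle_rad, fs):
        """Steer a linear microphone array toward steering_angle_rad.

        signals: (num_mics, num_samples) array of simultaneous recordings.
        mic_positions_m: per-microphone positions along the array axis.
        fs: sample rate in Hz. Returns the beamformed mono signal.
        """
        num_mics, num_samples = signals.shape
        output = np.zeros(num_samples)
        for m in range(num_mics):
            # Far-field plane-wave delay at this microphone, in seconds.
            delay_s = mic_positions_m[m] * np.cos(steering_angle_rad) / SPEED_OF_SOUND
            shift = int(round(delay_s * fs))
            # Advance the channel to align it with the steering direction.
            # (np.roll wraps samples around; acceptable for a short sketch.)
            output += np.roll(signals[m], -shift)
        return output / num_mics

Sounds arriving from the steered direction add coherently while off-axis sounds partially cancel, which is the sense in which a beam pattern "selects" a direction.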
As illustrated, an acoustic environment 500A can include a user 501 wearing an auricular device 100, a case 200, and an audio source 502, wherein the auricular device 100 receives audio 503A and the case 200 receives audio 503B from the audio source 502.
For example, case 200 can be configured with a microphone array, wherein the microphone array is used to determine a distance D2 from an audio source 502. In some examples, case 200 can determine a distance D2 based on, for example, the amplitude of audio 503B received by a microphone array. In some examples, a case 200 can determine distance D2 (e.g., via processor 202) based on a comparison between an amplitude of audio 503B received at a first microphone and a second microphone in a microphone array. Distance D2 can be, for example, any distance (e.g., 1, 5, 10, 20, or more meters).
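As a sketch of the amplitude-comparison approach just described, the following assumes free-field 1/r amplitude decay and a known spacing between two microphones along the line to the source; the function name and the 2 cm baseline are illustrative assumptions, not disclosed values.

    def estimate_distance_m(rms_near, rms_far, baseline_m):
        """Distance from the nearer microphone to the source, assuming
        free-field 1/r decay: rms_near / rms_far = (d + baseline_m) / d,
        which solves to d = baseline_m / (ratio - 1)."""
        ratio = rms_near / rms_far
        if ratio <= 1.0:
            raise ValueError("expected the nearer microphone to be louder")
        return baseline_m / (ratio - 1.0)

    print(estimate_distance_m(1.00, 0.95, 0.02))  # ~0.38 m for a 2 cm array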
Additionally and/or advantageously, a case 200 configured with an array of microphones can determine orientation information relative to audio 503B based on an adaptive beamformer applied to audio 503B. Orientation information can include 2D orientation information (e.g., a heading such as approximately 0 degrees North) and/or 3D orientation information (e.g., rotational and/or translational degrees of freedom). In some implementations, a case 200 can receive orientation information from an auricular device 100, an external device, and/or from gyroscopes 216 and/or accelerometer(s) 214 as described herein (e.g., by using an IMU and/or the like).
An auricular device 100 can be configured with a microphone array, wherein the microphone array is used to determine a distance D1 from an audio source 502. In some examples, an auricular device 100 can determine a distance D1 based on, for example, the amplitude of audio 503A received by a microphone array. In some examples, an auricular device 100 can determine distance D1 (e.g., via processor 102) based on a comparison between an amplitude of audio 503A received at a first microphone and a second microphone in a microphone array. Distance D1 can be, for example, any distance (e.g., 1, 5, 10, 20, or more meters).
Additionally and/or advantageously, an auricular device 100 (e.g., via processor 102) can determine orientation information of the auricular device 100 relative to the audio 503A (e.g., orientation information of the user 501 relative to the audio 503A). Orientation information can include 2D orientation information (e.g., a heading such as approximately 0 degrees North) and/or 3D orientation information (e.g., rotational and/or translational degrees of freedom). In some examples, an auricular device 100 configured with an array of microphones can determine orientation information based on an adaptive beamformer applied to audio 503A. In other implementations, an auricular device 100 can receive orientation information from a case 200, an external device, and/or from gyroscopes 116 and/or accelerometer(s) 114 as described herein (e.g., by using an IMU).
An auricular device 100 and/or case 200 can further determine an approximate angle A1 between the auricular device 100 and a case 200, as measured from the position of the audio source 502. In some examples, an auricular device 100 can determine an angle A1 based on audio 503A, a determined distance D1, orientation information of an auricular device 100, received audio 503B from a case 200, a distance D2, and/or orientation information of a case 200. In some examples, an angle A1 can be, for example, 1, 5, 10, 15, 25, or more degrees.
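One way such an angle could be computed, sketched below, is the law of cosines applied to distances D1 and D2. This assumes the auricular-device-to-case separation is also known (e.g., from a wireless ranging exchange, which is an assumption; the disclosure does not specify how the baseline would be obtained).

    import math

    def angle_at_source(d1_m, d2_m, baseline_m):
        """Angle A1 subtended at the audio source between the auricular
        device (distance d1_m) and the case (distance d2_m), given the
        device-to-case separation baseline_m. Returns degrees."""
        cos_a1 = (d1_m**2 + d2_m**2 - baseline_m**2) / (2 * d1_m * d2_m)
        cos_a1 = max(-1.0, min(1.0, cos_a1))  # clamp floating-point noise
        return math.degrees(math.acos(cos_a1))

    print(angle_at_source(3.0, 2.0, 1.2))  # ~15.6 degrees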
In some examples, the angle A1, distances D1 and D2, and/or orientation information can be used to spatially process audio 503A and/or 503B. For example, audio 503B received by a case 200 can be transmitted, along with one or more characteristics of an acoustic environment 500A, to an auricular device 100. The auricular device 100 can process the audio 503B and/or audio 503A (e.g., spatially process audio 503A and/or 503B based on D1, D2, A1, and/or orientation information). The auricular device 100 can modify audio 503A and/or 503B and provide a modified audio 503C to the user.
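As a hedged illustration of such spatial processing, the sketch below applies an interaural time difference (ITD) and level difference (ILD) derived from an azimuth angle. This is a textbook approximation rather than the disclosed filter, and the head-radius constant and 6 dB ILD ceiling are assumptions.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s
    HEAD_RADIUS = 0.0875     # m, average adult head radius (assumption)

    def spatialize(mono, azimuth_rad, fs):
        """Return (left, right) channels for a source at azimuth_rad
        (0 = straight ahead, positive = toward the listener's right)."""
        # Woodworth approximation for the interaural time difference.
        itd_s = HEAD_RADIUS / SPEED_OF_SOUND * (abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
        shift = int(round(itd_s * fs))
        # Crude interaural level difference: attenuate the far ear up to 6 dB.
        far_gain = 10 ** (-6.0 * abs(np.sin(azimuth_rad)) / 20.0)
        far = np.pad(mono, (shift, 0))[: len(mono)] * far_gain  # delayed and quieter
        if azimuth_rad >= 0:
            return far, mono   # source on the right: left ear is the far ear
        return mono, far

In practice, the azimuth would be derived from angle A1 and the orientation information described above, and distances D1 and D2 could set an overall gain.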
Additionally and/or optionally, acoustic characteristics (e.g., distance D1, D2, angle A1, and/or orientation information of the auricular device 100 and/or case 200) can be transmitted to and/or determined by processor 102, processor 202, and/or an external device as described herein.
In some examples, the accuracy of an estimated distance D1 and/or D2 can affect the quality of a spatially processed signal. For example, decreasing the actual distance between an auricular device 100 and/or case 200 and the audio source 502 can increase the accuracy of a distance estimate (e.g., D1 and/or D2) and the quality of a processed signal. For example, an amplitude difference between a first microphone and a second microphone in a microphone array associated with an auricular device 100 and/or case 200 can increase as the auricular device 100 and/or case 200 is moved closer to an audio source 502, resulting in a more accurate distance estimate. Conversely, the accuracy of an estimated distance D1 and/or D2, and the overall quality of a spatially processed signal, can decrease as the actual distance between an auricular device 100 and/or case 200 and the audio source 502 increases.
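A quick numerical illustration of this effect, assuming free-field 1/r decay and a 2 cm microphone spacing (both assumptions), shows how the inter-microphone level difference collapses with range:

    import math

    baseline = 0.02  # assumed 2 cm microphone spacing
    for d in (0.5, 2.0, 10.0):
        ratio = (d + baseline) / d   # near/far amplitude ratio under 1/r decay
        print(f"{d:5.1f} m -> {20 * math.log10(ratio):.3f} dB between mics")
    # 0.5 m -> 0.341 dB; 2.0 m -> 0.086 dB; 10.0 m -> 0.017 dB

At range, the fraction-of-a-decibel difference is easily swamped by measurement noise, which is why distance estimates degrade as the source moves away.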
As depicted, an acoustic environment 600 can include a user 601 wearing an auricular device 100, a case 200, and audio sources 602A and 602B. The auricular device 100 can receive audio 603A and 603B from audio sources 602A and 602B, respectively, and the case 200 can receive audio 604A and 604B from audio sources 602A and 602B, respectively.
An auricular device 100 and/or case 200 can determine a distance D1, D2, D3, and/or D4 based on, for example, a received amplitude of audio 603A, 603B, 604A, and/or 604B, respectively. In some examples, an auricular device 100 and/or a case 200 may be configured with a microphone array, and distance D1, D2, D3, and/or D4 may be determined based on a comparison between an amplitude of audio 603A, 603B, 604A, and/or 604B received at a first microphone and a second microphone in the microphone array. Distance D1, D2, D3, and/or D4 can be, for example, any distance (e.g., 1, 5, 10, 20, or more meters).
An auricular device 100 and/or a case 200 can determine one or more angles (e.g., A1, A2, A3, and/or A4) based on, for example, received audio 603A, 603B, 604A, and/or 604B, determined distances D1, D2, D3, and/or D4, and/or orientation information, as described above.
Additionally and/or advantageously, an auricular device 100 (e.g., via processor 102) can determine orientation information of the auricular device 100 relative to audio 603A and/or 603B (e.g., orientation information of the user 601 relative to audio 603A, 603B). Orientation information can include 2D orientation information (e.g., a heading such as approximately 0 degrees North) and/or 3D orientation information (e.g., rotational and/or translational degrees of freedom). In some examples, an auricular device 100 configured with an array of microphones can determine orientation information based on adaptive beamforming applied to audio 603A and/or 603B. In some implementations, an auricular device 100 can receive orientation information from a case 200, an external device, and/or from gyroscopes 116 and/or accelerometer(s) 114 as described herein (e.g., by using an IMU).
Optionally, a case 200 (e.g., via processor 202 and/or the like) can determine orientation information of the case 200 relative to audio 604A and/or 604B (e.g., relative to the direction of one or more audio sources 602A and/or 602B). Orientation information can include 2D orientation information (e.g., a heading such as approximately 0 degrees North) and/or 3D orientation information (e.g., rotational and/or translational degrees of freedom). In some examples, a case 200 can determine orientation information based on an adaptive beamformer applied to audio 604A and/or 604B. In some implementations, a case 200 can receive orientation information from an auricular device 100, an external device, and/or from gyroscopes 216 and/or accelerometer(s) 214 as described herein (e.g., by using an IMU and/or the like).
Advantageously, an auricular device 100 and/or a case 200 can be configured to determine one or more characteristics of an acoustic environment 600 (e.g., orientation information, one or more angles A1, A2, A3 and/or A4, and/or distance D1, D2, D3, and/or D4) by spatially processing audio (e.g., 603A, 603B, 604A, and/or 604B). In some examples, spatially processed audio data can be a stereo signal. In some examples, an auricular device 100 can transmit a spatially processed audio to a user 601 via speakers 124, such that the user 601 perceives the audio as if the user 601 was located at the same position and/or orientation as the case 200. For example, an auricular device 100 can spatially process audio to allow the user 601 to perceive audio as if the user 601 were in the middle of two audio sources.
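For illustration, a minimal way to produce such a stereo signal is to pan and attenuate each source according to its estimated azimuth and distance; the constant-power pan law and all names below are assumptions, not the disclosed processing.

    import numpy as np

    def render_stereo(sources):
        """sources: list of (mono: np.ndarray, azimuth_rad, distance_m) tuples,
        all audio the same length. Returns (left, right) channels."""
        length = len(sources[0][0])
        left, right = np.zeros(length), np.zeros(length)
        for mono, azimuth, distance in sources:
            gain = 1.0 / max(distance, 0.1)                          # simple 1/r attenuation
            pan = np.clip((azimuth + np.pi / 2) / np.pi, 0.0, 1.0)   # [-90, +90] deg -> [0, 1]
            left += mono * gain * np.cos(pan * np.pi / 2)            # constant-power pan law
            right += mono * gain * np.sin(pan * np.pi / 2)
        return left, right

    # Example: one source 30 degrees to the left at 2 m, one 30 degrees
    # to the right at 3 m, as if the listener stood at the case's position.
    fs = 16_000
    t = np.arange(fs) / fs
    src_a = np.sin(2 * np.pi * 440 * t)
    src_b = np.sin(2 * np.pi * 880 * t)
    left, right = render_stereo([(src_a, -np.pi / 6, 2.0), (src_b, np.pi / 6, 3.0)])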
At block 702, the auricular device 100 can receive a first audio data from an audio source. As described above, the first audio data can correspond to audio 503A received from an audio source 502 via microphones 122 of the auricular device 100.
At block 704, the auricular device 100 can determine orientation information and/or distance(s). For example, and as described above, an auricular device 100 can determine orientation information of the auricular device 100 relative to received audio (e.g., based on an adaptive beamformer applied to the received audio, and/or based on gyroscopes 116 and/or accelerometer(s) 114).
An auricular device 100 can determine and/or estimate a distance from an audio source 502 to the auricular device 100 based on received audio 503A. For example, an auricular device 100 can be configured with a microphone array, wherein the auricular device (e.g., via processor 102) uses the microphone array to determine a distance between the auricular device 100 and an audio source 502. In some examples, a processor 102 can determine a distance based on a comparison between an amplitude of audio 503A received at a first microphone and at a second microphone of a microphone array.
At block 706, the auricular device 100 can receive orientation information, estimated distance(s), and/or a second audio data from an external device. For example, a processor 102 can receive orientation information of a case 200 (and/or another external device as described herein) relative to an audio source 502.
The auricular device 100 can receive estimated distance(s) from an external device. A processor 102 can receive a distance estimate from, for example, a case 200 (and/or another external device as described herein) positioned near an audio source 502. In some examples, a case 200 can be configured with a microphone array, wherein the microphone array is used to determine a distance from an audio source 502 to the case 200. In some examples, an external device (e.g., via processor 202 of case 200 and/or the like) can determine a distance estimate based on a comparison between an amplitude of audio 503B received at a first microphone and at a second microphone in a microphone array.
The auricular device 100 can further receive a second audio data from an external device. The audio data can be, for example, associated with audio 503B received by a case 200 positioned near the audio source 502, as described above.
At block 708, the auricular device 100 can spatially process the first audio data and/or the second audio data. For example, an auricular device 100 can spatially process audio 503A and/or audio 503B based on the determined distances D1 and/or D2, angle A1, and/or orientation information, as described above.
In some examples, an auricular device 100 can spatially process a second audio data having a higher SNR than a first audio data, as described above (e.g., audio 503B received by a case 200 positioned near the audio source 502).
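A minimal sketch of choosing the higher-SNR stream might look like the following, where the noise-floor estimate is an assumed input rather than anything specified by the disclosure:

    import numpy as np

    def snr_db(audio, noise_floor_rms):
        """Estimate SNR in dB against an assumed noise-floor RMS."""
        rms = np.sqrt(np.mean(np.square(audio)))
        return 20.0 * np.log10(rms / noise_floor_rms)

    def pick_cleaner_stream(first_audio, second_audio, noise_floor_rms=1e-3):
        """Prefer the stream with the higher estimated SNR for spatial processing."""
        if snr_db(second_audio, noise_floor_rms) > snr_db(first_audio, noise_floor_rms):
            return second_audio
        return first_audio

A real implementation might blend the two streams rather than hard-switch, but the selection logic above captures the stated preference for the cleaner signal.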
At block 710, the auricular device 100 can transmit the spatially processed first and/or second audio data to a user. As described above, an auricular device 100 can emit the spatially processed audio via speakers 124 such that the user perceives the audio as if the user were located at the position and/or orientation of the case 200.
Although this invention has been disclosed in the context of certain preferred embodiments, it should be understood that certain advantages, features and aspects of the systems, devices, and methods may be realized in a variety of other embodiments. Additionally, it is contemplated that various aspects and features described herein can be practiced separately, combined together, or substituted for one another, and that a variety of combinations and sub-combinations of the features and aspects can be made and still fall within the scope of the invention. Furthermore, the systems and devices described above need not include all of the modules and functions described in the preferred embodiments.
Conditional language used herein, such as, among others, “can,” “could,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain features, elements, and/or steps are optional. Thus, such conditional language is not generally intended to imply that features, elements, and/or steps are in any way required or that one or more embodiments necessarily include logic for deciding, with or without other input or prompting, whether these features, elements, and/or steps are included or are to be always performed. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.
Conjunctive language such as the phrase “at least one of X, Y, and Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z. Thus, such conjunctive language is not generally intended to imply that certain embodiments require the presence of at least one of X, at least one of Y, and at least one of Z.
Language of degree used herein, such as the terms “approximately,” “about,” “generally,” and “substantially” as used herein, represents a value, amount, or characteristic close to the stated value, amount, or characteristic that still performs a desired function or achieves a desired result. For example, the terms “approximately,” “about,” “generally,” and “substantially” may refer to an amount that is within less than 10% of, within less than 5% of, within less than 1% of, within less than 0.1% of, and within less than 0.01% of the stated amount. As another example, in certain embodiments, the terms “generally parallel” and “substantially parallel” refer to a value, amount, or characteristic that departs from exactly parallel by less than or equal to 10 degrees, 5 degrees, 3 degrees, or 1 degree. As another example, in certain embodiments, the terms “generally perpendicular” and “substantially perpendicular” refer to a value, amount, or characteristic that departs from exactly perpendicular by less than or equal to 10 degrees, 5 degrees, 3 degrees, or 1 degree.
Although certain embodiments and examples have been described herein, it will be understood by those skilled in the art that many aspects of the systems and devices shown and described in the present disclosure may be differently combined and/or modified to form still further embodiments or acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure. A wide variety of designs and approaches are possible. No feature, structure, or step disclosed herein is essential or indispensable.
Any methods disclosed herein need not be performed in the order recited. The methods disclosed herein may include certain actions taken by a practitioner; however, they can also include any third-party instruction of those actions, either expressly or by implication.
The methods and tasks described herein may be performed and fully automated by a computer system. The computer system may, in some cases, include multiple distinct computers or computing devices (e.g., physical servers, workstations, storage arrays, cloud computing resources, etc.) that communicate and interoperate over a network to perform the described functions. Each such computing device typically includes a processor (or multiple processors) that executes program instructions or modules stored in a memory or other non-transitory computer-readable storage medium or device (e.g., solid state storage devices, disk drives, etc.). The various functions disclosed herein may be embodied in such program instructions, and/or may be implemented in application-specific circuitry (e.g., ASICs or FPGAs) of the computer system. Where the computer system includes multiple computing devices, these devices may, but need not, be co-located. The results of the disclosed methods and tasks may be persistently stored by transforming physical storage devices, such as solid-state memory chips and/or magnetic disks, into a different state. The computer system may be a cloud-based computing system whose processing resources are shared by multiple distinct business entities or other users.
Depending on the embodiment, certain acts, events, or functions of any of the processes or algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described operations or events are necessary for the practice of the algorithm). Moreover, in certain embodiments, operations or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially.
Various illustrative logical blocks, modules, routines, and algorithm steps that may be described in connection with the disclosure herein can be implemented as electronic hardware (e.g., ASICs or FPGA devices), computer software that runs on general purpose computer hardware, or combinations of both. Various illustrative components, blocks, and steps may be described herein generally in terms of their functionality. Whether such functionality is implemented as specialized hardware versus software running on general-purpose hardware depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
Moreover, various illustrative logical blocks and modules that may be described in connection with the disclosure herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. A processor can include an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the rendering techniques described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
The elements of any method, process, routine, or algorithm described in connection with the disclosure herein can be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of a non-transitory computer-readable storage medium. An exemplary storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The processor and the storage medium can reside in an ASIC. The ASIC can reside in a user terminal. In the alternative, the processor and the storage medium can reside as discrete components in a user terminal.
While the above detailed description has shown, described, and pointed out novel features, it can be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As can be recognized, certain portions of the description herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others. The scope of certain embodiments disclosed herein is indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims benefit of U.S. Provisional Patent Application No. 63/483,491 filed Feb. 6, 2023, and titled “SYSTEMS FOR USING AN AURICULAR DEVICE CONFIGURED WITH AN INDICATOR AND BEAMFORMER FILTER UNIT.” The entire disclosure of each of the above items is hereby made part of this specification as if set forth fully herein and incorporated by reference for all purposes, for all that it contains. Any and all applications for which a foreign or domestic priority claim is identified in the Application Data Sheet as filed with the present application are hereby incorporated by reference under 37 CFR 1.57 for all purposes and for all that they contain.