This generally relates to enhancing a listening experience and, more particularly, to enhancing a user's listening experience by adjusting physical attributes of an audio playback system based on detected environmental attributes of the system's environment.
Some user electronic devices may be operative to play back audio data for a listening user. However, the quality of the listening experience is often diminished by variables in the device's environment.
Systems, methods, and computer-readable media are provided for enhancing a user's listening experience by adjusting physical attributes of an audio playback system based on detected environmental attributes of the system's environment.
As an example, a method of enhancing a listening experience of a user of an electronic device is provided that may include emitting sound waves from an audio output component of the electronic device using audio data electrical signals, detecting, with the electronic device, environmental attribute data indicative of an environmental attribute of an environment of the electronic device, processing the detected environmental attribute data, using the electronic device, to generate physical attribute adjustment data, and adjusting a physical attribute of the electronic device using the physical attribute adjustment data, wherein the physical attribute of the electronic device includes an orientation of the audio output component with respect to the environment, a position of a sound wave reflecting component with respect to the audio output component, a geometry of a sound wave passageway for the emitted sound waves, or a tautness of a membrane of the audio output component.
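The flow summarized above (detect environmental attribute data, process it into physical attribute adjustment data, then adjust a physical attribute) may be pictured with a minimal Python sketch. The sketch is illustrative only and is not the claimed method; every class name, field, and scaling rule below is an assumption made for the example.

```python
# Illustrative sketch of the detect -> process -> adjust flow; names and values are assumed.
from dataclasses import dataclass

@dataclass
class EnvironmentalAttributeData:
    listener_distance_m: float   # distance of a user from the audio output component
    listener_azimuth_deg: float  # direction of the user relative to the emitter
    ambient_noise_db: float      # ambient noise level of the environment

@dataclass
class PhysicalAttributeAdjustment:
    emitter_orientation_deg: float  # orientation of the audio output component
    reflector_angle_deg: float      # position of a sound wave reflecting component
    membrane_tautness: float        # 0.0 (slack) to 1.0 (fully taut)

def process_environmental_attributes(env: EnvironmentalAttributeData) -> PhysicalAttributeAdjustment:
    """Generate physical attribute adjustment data from detected environmental attribute data."""
    return PhysicalAttributeAdjustment(
        emitter_orientation_deg=env.listener_azimuth_deg,        # aim the emitter toward the user
        reflector_angle_deg=env.listener_azimuth_deg / 2.0,      # roughly bisect emitter/listener directions
        membrane_tautness=min(1.0, 0.5 + env.ambient_noise_db / 100.0),  # assumed scaling rule
    )

if __name__ == "__main__":
    detected = EnvironmentalAttributeData(2.5, 30.0, 40.0)      # stand-in for sensor readings
    print(process_environmental_attributes(detected))
```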
As an example, an electronic device is provided that may include a lower housing structure including an audio output component that emits sound waves into an environment of the electronic device, an upper housing structure including a display output component, a hinge structure coupling the lower housing structure to the upper housing structure, a sensor input component that detects environmental attribute data indicative of an environmental attribute of the environment of the electronic device, and a movement output component that adjusts the position of the upper housing structure with respect to the lower housing structure through rotation about the hinge structure based on the detected environmental attribute data for changing the reflection of the sound waves in the environment.
As yet another example, a product is provided that may include a non-transitory computer-readable medium and computer-readable instructions, stored on the computer-readable medium, that, when executed, are effective to cause a computer to detect environmental attribute data indicative of an environmental attribute of an ambient environment of the computer and adjust a physical attribute of the computer based on the environmental attribute data, wherein the physical attribute includes a position of an element of an audio output component of the computer with respect to the ambient environment of the computer, and wherein the environmental attribute includes geometry of the ambient environment, location of the user with respect to the audio output component, geometry of an ear of the user, and otoacoustic emission of an ear of the user.
This Summary is provided only to present some example embodiments, so as to provide a basic understanding of some aspects of the subject matter described in this document. Accordingly, it will be appreciated that the features described in this Summary are only examples and should not be construed to narrow the scope or spirit of the subject matter described herein in any way. Unless otherwise stated, features described in the context of one example may be combined or used with features described in the context of one or more other examples. Other features, aspects, and advantages of the subject matter described herein will become apparent from the following Detailed Description, Figures, and Claims.
The discussion below makes reference to the following drawings, in which like reference characters refer to like parts throughout, and in which:
In the following detailed description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of the various embodiments described herein. Those of ordinary skill in the art will realize that these various embodiments are illustrative only and are not intended to be limiting in any way. Other embodiments will readily suggest themselves to such skilled persons having the benefit of this disclosure.
In addition, for clarity purposes, not all of the routine features of the embodiments described herein are shown or described. One of ordinary skill in the art will readily appreciate that in the development of any such actual embodiment, numerous embodiment-specific decisions may be required to achieve specific design objectives. These design objectives will vary from one embodiment to another and from one developer to another. Moreover, it will be appreciated that such a development effort might be complex and time-consuming, but would nevertheless be a routine engineering undertaking for those of ordinary skill in the art having the benefit of this disclosure.
Systems, methods, and computer-readable media for enhancing a user's listening experience by adjusting physical attributes of an audio playback system based on detected environmental attributes of the system's environment are provided and described with reference to
System 1 may be configured to detect any suitable environmental attributes of a current environment of system 1, including, but not limited to, the geometry (e.g., size and/or shape) of a room or defined space of the environment, the location and/or orientation of one or more system users within the environment relative to the sound wave emitting subassembly of device 100 (e.g., distance of a user from sound wave emitting subassembly and/or orientation of the ears with respect to the sound wave emitting subassembly), the specific identity or class identity of one or more system users within the environment, the geometry (e.g., size and/or shape) and/or the exposition of the ears of one or more system users within the environment relative to the sound wave emitting subassembly of device 100, the otoacoustic emissions (e.g., spontaneous otoacoustic emissions and/or evoked otoacoustic emissions) of the ears of one or more system users within the environment, the ambient noise level or other audio qualities of the environment distinct from any sound waves emitted by system 1, any audio qualities of the environment including the sound waves emitted by system 1, and/or the like. Electronic device 100 and/or any auxiliary assembly 200 of system 1 may include any suitable input component(s) (e.g., environmental attribute sensor input component(s)) that may be operative to detect any suitable environmental attribute of the environment of system 1 (e.g., cameras, ultrasonic sensors, infrared light sensors, microphones, temperature sensors, etc.) and/or may include any suitable communication component that may be operative to receive any suitable data indicative of any suitable environmental attribute of the environment of system 1 from any suitable remote data source (e.g., a data server (not shown) that may be operative to share data indicative of any suitable architectural characteristics of the environment and/or data indicative of a particular user's ear structure or preferred audio equalization settings).
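For illustration only, one way such environmental attribute data might be gathered from local sensor input components and merged with data received from a remote data source is sketched below; the function names, dictionary keys, and transport details are assumptions, not details of this description.

```python
# Hedged sketch: merge locally sensed environmental attributes with remotely provided data.
import json
from urllib.request import urlopen

def read_local_sensors() -> dict:
    """Stand-in for camera/microphone/ultrasonic readings of the environment."""
    return {
        "room_dimensions_m": (4.0, 5.0, 2.5),   # geometry of the room
        "listener_distance_m": 2.1,             # user location relative to the emitter
        "ambient_noise_db": 38.0,               # ambient noise level
    }

def fetch_remote_attributes(url: str) -> dict:
    """Stand-in for querying a remote data server (e.g., architectural data or user ear data)."""
    try:
        with urlopen(url, timeout=2) as resp:
            return json.load(resp)
    except (OSError, ValueError):
        return {}   # fall back to locally sensed attributes only

def gather_environmental_attributes(remote_url: str) -> dict:
    attributes = read_local_sensors()
    attributes.update(fetch_remote_attributes(remote_url))   # remote data may refine local data
    return attributes

# attributes = gather_environmental_attributes("https://example.com/room-profile")  # hypothetical endpoint
```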
Before or while a sound wave emitting subassembly (e.g., any suitable transducer or driver that may be operative to receive audio data electrical signals and convert or transduce the received electrical signals into corresponding sound waves) of electronic device 100 may emit sound waves into the environment of system 1, system 1 may be configured to adjust, based on any detected environmental attributes of the environment of system 1, any suitable physical system attributes of system 1, including, but not limited to, the orientation of any element(s) of the sound wave emitting subassembly of device 100 with respect to any element(s) of the environment (e.g., the ears of a system user) in any one or more degrees of freedom (e.g., about any one or more axes of a three-dimensional Cartesian coordinate system for the environment), the geometry (e.g., size and/or shape) of any element(s) of the sound wave emitting subassembly of device 100, the location and/or orientation of any suitable sound wave reflecting component of device 100 and/or of any auxiliary assembly 200 relative to the sound wave emitting subassembly of device 100 and/or relative to any element(s) of the environment (e.g., the ears of a detected system user), the magnitude of any suitable movement (e.g., vibration, force, movement, actuator stroke, etc.) of any suitable movement output component, such as a movement output component embedded within or coupled to a sound wave reflecting component of device 100 and/or of any auxiliary assembly 200, and/or the like. In some embodiments, adjustment of one or more physical system attributes of system 1 may be based not only on any detected environmental attribute(s) of the environment of system 1 but also on any suitable characteristics of the sound waves emitted into the environment of system 1 by the sound wave emitting subassembly of device 100. Any physical system attribute adjustment may be made by system 1 to enhance the experience of a system user listening to the sound waves emitted by the sound wave emitting subassembly of device 100. Electronic device 100 and/or any auxiliary assembly 200 of system 1 may include any suitable output component(s) (e.g., physical or mechanical output components) that may be operative to be moved for adjusting any suitable physical system attributes of system 1 (e.g., sound reflecting surfaces, motors, piezoelectric actuators, etc.).
Electronic device 100 of system 1 may be any portable, wearable, mobile, or hand-held electronic device configured to emit sound waves, detect environmental attributes of its environment, and/or adjust physical attributes of system 1 to enhance a user's experience listening to the emitted sound waves. Alternatively, electronic device 100 may not be portable at all, but may instead be generally stationary. Electronic device 100 can include, but is not limited to, an audio player, game player, other media player, radio, medical equipment, domestic appliance, transportation vehicle instrument, musical instrument, cellular telephone (e.g., an iPhone™ available from Apple Inc.), other wireless communication device, personal digital assistant, remote control, pager, computer (e.g., a desktop, laptop, tablet, server, etc.), monitor, television, stereo equipment, set-top box, wearable device (e.g., an Apple Watch™ by Apple Inc.), boom box, modem, router, printer, and combinations thereof. Electronic device 100 may include any suitable control circuitry or processor 102, memory 104, communications component 106, power supply 108, input component 110, and output component 112. Electronic device 100 may also include a bus 114 that may provide one or more wired or wireless communication links or paths for transferring data and/or power to, from, or between various other components of device 100. Device 100 may also be provided with a housing 101 that may at least partially enclose one or more of the components of device 100 for protection from debris and other degrading forces external to device 100. In some embodiments, one or more of the components may be provided within its own housing (e.g., input component 110 may be an independent keyboard or mouse within its own housing that may wirelessly or through a wire communicate with processor 102, which may be provided within its own housing). In some embodiments, one or more components of electronic device 100 may be combined or omitted. Moreover, electronic device 100 may include other components not combined or included in
Memory 104 may include one or more storage mediums, including for example, a hard-drive, flash memory, permanent memory such as read-only memory (“ROM”), semi-permanent memory such as random access memory (“RAM”), any other suitable type of storage component, or any combination thereof. Memory 104 may include cache memory, which may be one or more different types of memory used for temporarily storing data for electronic device applications. Memory 104 may store media data (e.g., audio (e.g., music) and image and other media files), software (e.g., applications for implementing functions on device 100 (e.g., media playback applications and system environment processing applications)), firmware, preference information (e.g., media playback preferences), lifestyle information (e.g., food preferences), exercise information (e.g., information obtained by exercise monitoring equipment), transaction information (e.g., information such as credit card information), wireless connection information (e.g., information that may enable device 100 to establish a wireless connection), subscription information (e.g., information that keeps track of podcasts or television shows or other media a user subscribes to), contact information (e.g., telephone numbers and e-mail addresses), calendar information, any other suitable data, or any combination thereof.
Communications component 106 may be provided to allow device 100 to communicate with one or more other electronic devices or servers or subsystems (e.g., one or more auxiliary assemblies (e.g., assembly 200 of
Power supply 108 may provide power to one or more of the components of device 100. In some embodiments, power supply 108 can be coupled to a power grid (e.g., when device 100 is not a portable device, such as a desktop computer). In some embodiments, power supply 108 can include one or more batteries for providing power (e.g., when device 100 is a portable device, such as a cellular telephone). As another example, power supply 108 can be configured to generate power from a natural source (e.g., solar power using solar cells).
One or more input components 110 may be provided to permit a user to interact or interface with device 100 (e.g., to provide any suitable user control data) and/or to detect any suitable environmental attributes of the environment of system 1 (e.g., certain information about the ambient environment). For example, input component 110 can take a variety of forms, including, but not limited to, a touch pad, trackpad, dial, click wheel, scroll wheel, touch screen, one or more buttons (e.g., a keyboard), mouse, joy stick, track ball, switch, photocell, force-sensing resistor (“FSR”), encoder (e.g., rotary encoder and/or shaft encoder that may convert an angular position or motion of a shaft or axle to an analog or digital code), microphone, camera, scanner (e.g., a three-dimensional scanner that may identify the three-dimensional geometry (e.g., shape and/or size) of any suitable structure (e.g., the ear of a user), a barcode scanner or any other suitable scanner that may obtain product identifying information from a code, such as a linear barcode, a matrix barcode (e.g., a quick response (“QR”) code), or the like), proximity sensor (e.g., capacitive proximity sensor), biometric sensor (e.g., a fingerprint reader or other feature recognition sensor, which may operate in conjunction with a feature-processing application that may be accessible to electronic device 100 for authenticating or otherwise identifying or detecting a user), line-in connector for data and/or power, force sensor (e.g., any suitable capacitive sensors, pressure sensors, strain gauges, sensing plates (e.g., capacitive and/or strain sensing plates), etc.), ultrasonic sensor, thermal and/or temperature sensor (e.g., thermistor, thermocouple, thermometer, silicon bandgap temperature sensor, bimetal sensor, etc.) for detecting the temperature of a portion of electronic device 100 or an ambient environment thereof, a performance analyzer for detecting an application characteristic related to the current operation of one or more components of electronic device 100 (e.g., processor 102), motion sensor (e.g., single axis or multi axis accelerometers, angular rate or inertial sensors (e.g., optical gyroscopes, vibrating gyroscopes, gas rate gyroscopes, or ring gyroscopes), linear velocity sensors, and/or the like), magnetometer (e.g., scalar or vector magnetometer), pressure sensor, light sensor (e.g., ambient light sensor (“ALS”), infrared (“IR”) sensor, etc.), acoustic sensor, sonic or sonar sensor, radar sensor, image sensor, video sensor, any suitable device locating subsystem or global positioning system (“GPS”) detector or subsystem, radio frequency (“RF”) detector, RF or acoustic Doppler detector, RF triangulation detector, electrical charge sensor, peripheral device detector, event counter, and any combinations thereof. Each input component 110 can be configured to provide one or more dedicated control functions for making selections or issuing commands associated with operating device 100.
One or more output components 112 may be provided to present information (e.g., graphical, audible, and/or tactile information) to a user of device 100 and/or to adjust any physical system attribute of system 1. For example, output component 112 can take a variety of forms, including, but not limited to, a sound wave emitting subassembly (e.g., any suitable transducer or driver subassembly that may be operative to receive audio data electrical signals (e.g., of an audio or other suitable media file or streamed data that may be accessible to device 100) and to convert or transduce the received electrical signals into corresponding sound waves), a sound wave reflecting subassembly (e.g., any suitable physical or mechanical sound wave reflecting component(s) that may be operative to reflect sound waves in any suitable manner) that may be moved in one or more directions (e.g., with respect to a sound wave emitting subassembly), any suitable physical or mechanical movement output component that may be operative to be moved for adjusting any suitable physical system attribute(s) of system 1 (e.g., motors, piezoelectric actuators, etc.) and that may be embedded within or coupled to a sound wave reflecting component or any other suitable component of device 100, data and/or power line-out, visual display (e.g., for transmitting data via visible light and/or via invisible light), antenna, infrared port, flash (e.g., light sources for providing artificial light for illuminating an environment of the device), tactile/haptic component (e.g., rumblers, vibrators, etc.), taptic component (e.g., components that are operative to provide tactile sensations in the form of vibrations), and any combinations thereof.
It should be noted that one or more input components 110 and one or more output components 112 may sometimes be referred to collectively herein as an input/output (“I/O”) component or I/O interface 111 (e.g., input component 110 and display 112 as I/O component or I/O interface 111). For example, input component 110 and display 112 may sometimes be a single I/O component 111, such as a touch screen that may receive input information through a user's touch of a display screen and that may also provide visual information to a user via that same display screen, or such as a transducer that may receive audio input information from a user when operating as a microphone and that may provide audio information to a user when operating as a speaker.
Processor 102 of device 100 may include any processing circuitry operative to control the operations and performance of one or more components of electronic device 100. For example, processor 102 may be used to run one or more applications, such as an application 103. Application 103 may include, but is not limited to, one or more operating system applications, firmware applications, media playback applications and/or environmental attribute processing applications and/or physical system attribute adjustment applications (e.g., a combined listening enhancement application), media editing applications, pass applications, calendar applications, state determination applications (e.g., device state determination applications, auxiliary assembly state determination applications), biometric feature-processing applications, compass applications, health applications, thermometer applications, weather applications, thermal management applications, force sensing applications, device diagnostic applications, video game applications, or any other suitable applications. For example, processor 102 may load application 103 as a user interface program or any other suitable program to determine how instructions or data received via an input component 110 and/or via any other component of device 100 (e.g., environmental attribute data or auxiliary assembly state/capability data from any auxiliary assembly 200 via communications component 106, etc.) may manipulate the one or more ways in which information may be stored on device 100 (e.g., in memory 104) and/or in which information may be provided to a user and/or in which physical system attributes may be adjusted via an output component 112 and/or in which auxiliary assembly control data may be provided to a remote subsystem (e.g., to one or more auxiliary assemblies 200 via communications component 106). Application 103 may be accessed by processor 102 from any suitable source, such as from memory 104 (e.g., via bus 114) or from another device or server (e.g., from auxiliary assembly 200 via communications component 106 and/or from any other suitable remote data source (e.g., remote data server) via communications component 106). Electronic device 100 (e.g., processor 102, memory 104, or any other components available to device 100) may be configured to process data and/or generate commands at various resolutions, frequencies, and various other characteristics as may be appropriate for the capabilities and resources of device 100. Processor 102 may include a single processor or multiple processors. For example, processor 102 may include at least one “general purpose” microprocessor, a combination of general and special purpose microprocessors, instruction set processors, audio processing units or sound cards, graphics processors, video processors, and/or related chips sets, and/or special purpose microprocessors. Processor 102 also may include on board memory for caching purposes. Processor 102 may be implemented as any electronic device capable of processing, receiving, or transmitting data or instructions. For example, processor 102 can be a microprocessor, a central processing unit, an application-specific integrated circuit, a field-programmable gate array, a digital signal processor, an analog circuit, a digital circuit, or combination of such devices. Processor 102 may be a single-thread or multi-thread processor. Processor 102 may be a single-core or multi-core processor. 
Accordingly, as described herein, the term “processor” may refer to a hardware-implemented data processing device or circuit physically structured to execute specific transformations of data including data operations represented as code and/or instructions included in a program that can be stored within and accessed from a memory. The term is meant to encompass a single processor or processing unit, multiple processors, multiple processing units, analog or digital circuits, or other suitably configured computing element or combination of elements.
Auxiliary assembly 200 may be any suitable assembly that may be configured to detect any suitable environmental attributes of the environment of system 1 and/or adjust any suitable physical system attributes of assembly 200. Auxiliary assembly 200 may include any suitable control circuitry or processor 202, which may be similar to any suitable processor 102 of device 100, application 203, which may be similar to any suitable application 103 of device 100, memory 204, which may be similar to any suitable memory 104 of device 100, communications component 206, which may be similar to any suitable communications component 106 of device 100, power supply 208, which may be similar to any suitable power supply 108 of device 100, input component 210, which may be similar to any suitable input component 110 of device 100, output component 212, which may be similar to any suitable output component 112 of device 100, I/O interface 211, which may be similar to any suitable I/O interface 111 of device 100, bus 214, which may be similar to any suitable bus 114 of device 100, and/or housing 201, which may be similar to any suitable housing 101 of device 100. In some embodiments, one or more components of auxiliary assembly 200 may be combined or omitted. Moreover, auxiliary assembly 200 may include other components not combined or included in
As shown in
Hinge housing 110h may provide support for any suitable components, such as a movement output component 112g, which may be operative to rotatably adjust (e.g., automatically without physical user interaction) the position of upper housing 101u with respect to lower housing 101l (e.g., to adjust the magnitude of angle θ therebetween) such that one or more surfaces of at least a portion of upper housing 101u and/or display output component 112e may also be operative to function as a sound wave reflecting subassembly for reflecting sound waves emitted from sound wave emitting subassembly output component 112a and/or from sound wave emitting subassembly output component 112b in any suitable direction (e.g., a magnitude of rotatable adjustment of the position of upper housing 101u with respect to the position of lower housing 101l and sound wave emitting subassembly output components 112a and 112b by movement output component 112g may be a physical system attribute that may be adjusted for enhancing a user's listening experience).
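A simplified two-dimensional geometric sketch of how such a hinge angle might be chosen is shown below; it treats the upper housing as a planar mirror through the hinge and applies the law of reflection (the mirror normal bisects the incoming and outgoing ray directions). It is not the control algorithm of this description, and the coordinates, names, and example values are assumptions.

```python
# Hedged 2-D geometry sketch: choose a lid (hinge) angle that reflects sound toward a listener.
import math

def hinge_angle_for_listener(speaker_xy, listener_xy, hinge_xy=(0.0, 0.0)):
    """Return the lid angle in degrees, measured from the lower housing plane (x-axis),
    whose plane, treated as a mirror through the hinge, reflects a ray travelling from
    the speaker toward the hinge onward toward the listener."""
    def unit(v):
        mag = math.hypot(v[0], v[1])
        return (v[0] / mag, v[1] / mag)

    # Direction of sound travelling from the speaker toward the hinge-mounted reflector,
    # and desired direction from the reflector toward the listener.
    d_in = unit((hinge_xy[0] - speaker_xy[0], hinge_xy[1] - speaker_xy[1]))
    d_out = unit((listener_xy[0] - hinge_xy[0], listener_xy[1] - hinge_xy[1]))

    # Law of reflection: the mirror normal must be parallel to (d_out - d_in).
    n = unit((d_out[0] - d_in[0], d_out[1] - d_in[1]))

    # The reflecting plane is perpendicular to the normal; its angle above the
    # lower housing plane is the hinge opening angle.
    return (math.degrees(math.atan2(n[1], n[0])) - 90.0) % 180.0

# Example (assumed positions): speaker 5 cm in front of the hinge on the table,
# listener 1 m away and 40 cm above the table surface.
theta = hinge_angle_for_listener(speaker_xy=(0.05, 0.0), listener_xy=(1.0, 0.4))
print(f"suggested hinge angle: {theta:.1f} degrees")
```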
As shown in
As also shown in
As also shown in
Additionally or alternatively, as shown in
Adjustment component(s) 402 may be controlled to move one or more elements 412i with respect to one or more other elements 412i within structure 412s for adjusting any suitable physical characteristic of one or more openings 401o. For example, an adjustment component 402 may receive any suitable physical system attribute adjustment data (e.g., from processor 102) that may be operative to control that adjustment component 402 to adjust the position of its associated element 412i in any suitable manner, such as by moving the entirety or at least a portion of element 412i in the +X direction or the −X direction along an X-axis (e.g., to move a vertical element closer to or farther away from an adjacent vertical element (e.g., for adjusting a dimension m of one or more openings 401o)), moving the entirety or at least a portion of element 412i in the +Y direction or the −Y direction along a Y-axis (e.g., to move a horizontal element closer to or farther away from an adjacent horizontal element (e.g., for adjusting a dimension n of one or more openings 401o)), moving the entirety or at least a portion of element 412i in the +Z direction or the −Z direction along a Z-axis (e.g., to pull portions of an interlaced mesh closer to or farther away from output component 112a and/or opening(s) 101o), rotating the entirety or at least a portion of element 412i in the S1 direction or the S2 direction about the Z-axis (e.g., to adjust the angular orientation of two or more elements (e.g., for adjusting the size of an angular dimension γ between crossing elements)), rotating the entirety or at least a portion of element 412i in the R1 direction or the R2 direction about the X-axis (e.g., to adjust the angular orientation of elements (e.g., rotating a horizontal element 412i about its center C for adjusting the size of dimension n of opening 401o between elements when a cross-sectional shape of one or more of the elements is non-circular (e.g., an isosceles triangle, as shown in
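For illustration, a hedged sketch of turning such physical system attribute adjustment data into per-element actuator commands (here, only for widening dimension m along the X-axis) is given below; the element identifiers, units, and even-split policy are assumptions, not details of this description.

```python
# Hedged sketch: convert a target opening width into per-element X-axis actuator commands.
from dataclasses import dataclass
from typing import List

@dataclass
class ElementCommand:
    element_id: int
    axis: str          # "X", "Y", "Z", or a rotation such as "R" (about X) or "S" (about Z)
    delta: float       # signed displacement in millimetres (or rotation in degrees)

def commands_for_opening(target_m_mm: float, current_m_mm: float,
                         vertical_element_ids: List[int]) -> List[ElementCommand]:
    """Spread a change in opening dimension m across adjacent vertical elements by
    moving them in opposite directions along the X-axis (assumed policy)."""
    total_delta = target_m_mm - current_m_mm
    per_element = total_delta / max(1, len(vertical_element_ids))
    return [ElementCommand(eid, "X", per_element * (1 if i % 2 else -1))
            for i, eid in enumerate(vertical_element_ids)]

# Example: widen opening dimension m from 1.0 mm to 1.4 mm using two adjacent vertical
# elements, each moved 0.2 mm away from the other.
cmds = commands_for_opening(1.4, 1.0, vertical_element_ids=[3, 4])
print(cmds)
```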
As also shown in
As also shown, one or more reflective structures 112cs of component 112c may have embedded therein or otherwise coupled thereto one or more discrete movement output components 112cm (e.g., a piezoelectric actuator), where each one of such movement output components 112cm may be independently controlled (e.g., by any suitable physical system attribute adjustment data received from processor 102) to adjust the magnitude of a discrete movement of the movement component (e.g., a discrete vibration, etc.) that may be operative to affect any sound wave(s) reflecting off of the reflective structure 112cs associated with the movement component. Similarly, movement component 112f of device 100 (e.g., behind display output component 112e) may be one or more discrete movement output components (e.g., a piezoelectric actuator), where each one of such movement output components may be independently controlled (e.g., by any suitable physical system attribute adjustment data received from processor 102) to adjust the magnitude of a discrete movement of the movement component (e.g., a discrete vibration, etc.) that may be operative to affect any sound wave(s) reflecting off of a reflective surface associated with the movement component (e.g., a surface of display output component 112e). Similarly, movement component 112d of device 100 (e.g., within housing structure 101l) may be one or more discrete movement output components (e.g., a piezoelectric actuator) that may be independently controlled (e.g., by any suitable physical system attribute adjustment data received from processor 102) to adjust the magnitude of a discrete movement of the movement component (e.g., a discrete vibration, etc.) that may be operative to affect any sound wave(s) emitted by output component 112a and/or to vibrate against table T for supplementing any sound wave(s) emitted by output component 112a. Additionally, as shown, housing 101l may include a microphone input component 110d and/or any other suitable sensing input components that may be operative to detect any suitable environmental attributes of environment E, such as the otoacoustic emissions (e.g., spontaneous otoacoustic emissions and/or evoked otoacoustic emissions) of the ears of user U1 and/or user U2, the ambient noise level or other audio qualities of environment E distinct from any sound waves emitted by sound emitting subassembly output component 112a and/or by sound emitting subassembly output component 112b, any audio qualities of environment E including any sound waves emitted by system 1 (e.g., emitted sound wave SW and/or reflected sound wave SWR or any other sound waves within environment E), and/or the like.
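One way such independently controlled drive levels might be derived from adjustment data is sketched below, for illustration only; the field names, the per-actuator gain map, and the ambient-noise boost are assumptions rather than behavior described above.

```python
# Hedged sketch: map adjustment data to independent drive levels for discrete actuators.
def actuator_drive_levels(adjustment_data: dict, actuator_ids: list) -> dict:
    """Return a normalized drive level (0.0-1.0) per actuator.

    adjustment_data is assumed to carry a per-actuator "gains" map plus a global
    "ambient_noise_db" term used to raise all levels in louder rooms."""
    ambient_boost = min(0.3, max(0.0, (adjustment_data.get("ambient_noise_db", 0.0) - 30.0) / 100.0))
    levels = {}
    for actuator_id in actuator_ids:
        base = adjustment_data.get("gains", {}).get(actuator_id, 0.0)
        levels[actuator_id] = min(1.0, base + ambient_boost)
    return levels

# Example (hypothetical identifiers): two actuators on reflector 112c and one behind the display.
levels = actuator_drive_levels(
    {"ambient_noise_db": 45.0, "gains": {"112cm-1": 0.4, "112cm-2": 0.5, "112f": 0.2}},
    actuator_ids=["112cm-1", "112cm-2", "112f"],
)
print(levels)
```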
Auxiliary assembly 200a may be removably coupled to a side of housing 101 of electronic device 100 and may include an output component 212a that may be similar to movement output component/sound wave reflecting output component 112c, with or without one or more discrete movement components, such that assembly 200a may be operative to be positioned in any suitable manner to reflect or otherwise manipulate sound waves emitted from output component 112b in any suitable manner. Similarly, auxiliary assembly 200b may be coupled to ceiling CL and assembly 200c may be coupled to left wall LW and assembly 200f may be resting on a top surface of furniture N, each of which may be similar to movement output component/sound wave reflecting output component 112c, with or without one or more discrete movement components, such that each assembly may be operative to be positioned in any suitable manner to reflect or otherwise manipulate any sound waves that may reach any suitable surface(s) of the assembly.
Auxiliary assembly 200d may be worn by user U1 in any suitable manner, such as about the user's head, such that different portions of assembly 200d may physically interact with different portions of the user's head. For example, a first output component 212b of assembly 200d may be operative to be positioned adjacent user U1's left ear such that physical system attribute adjustment of output component 212b may physically manipulate the physical structure of user U1's left ear (e.g., based on any suitable physical system attribute adjustment data 99 from device 100, which may adjust the shape of the ear to better receive sound waves (e.g., to change the frequency response of the ear to enhance the listening experience of user U1)). Assembly 200d may also include a microphone input component 210a that may be operative to detect any suitable environmental attributes of environment E, such as the otoacoustic emissions (e.g., spontaneous otoacoustic emissions and/or evoked otoacoustic emissions) of the left ear of user U1, the ambient noise level or other audio qualities of environment E distinct from any sound waves emitted by sound emitting subassembly output component 112a and/or by sound emitting subassembly output component 112b, any audio qualities of environment E including any sound waves emitted by system 1 (e.g., emitted sound wave SW and/or reflected sound wave SWR or any other sound waves within environment E), and/or the like. Similarly, a second output component 212c of assembly 200d may be operative to be positioned adjacent user U1's right ear such that physical system attribute adjustment of output component 212c may physically manipulate the physical structure of user U1's right ear (e.g., based on any suitable physical system attribute adjustment data 99 from device 100, which may adjust the shape of the ear to better receive sound waves (e.g., to change the frequency response of the ear to enhance the listening experience of user U1)). Assembly 200d may also include a microphone input component 210b that may be operative to detect any suitable environmental attributes of environment E, such as the otoacoustic emissions (e.g., spontaneous otoacoustic emissions and/or evoked otoacoustic emissions) of the right ear of user U1, the ambient noise level or other audio qualities of environment E distinct from any sound waves emitted by sound emitting subassembly output component 112a and/or by sound emitting subassembly output component 112b, any audio qualities of environment E including any sound waves emitted by system 1 (e.g., emitted sound wave SW and/or reflected sound wave SWR or any other sound waves within environment E), and/or the like. A third output component 212d of assembly 200d may be operative to be positioned against a back of user U1's head as a discrete movement output component such that physical system attribute adjustment of output component 212d may physically vibrate against the head of user U1 in a particular manner to supplement the sensation of any sensed sound waves (e.g., based on any suitable physical system attribute adjustment data 99 from device 100), which may enhance the listening experience of user U1. Assembly 200e may be a handheld assembly of user U1 (e.g., a smartphone) that may be operative to communicate any suitable data to device 100 (e.g., the identity of user U1, the location of user U1, the shape of each ear of user U1 (e.g., if prompted to provide such information by device 100), and/or the like).
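A greatly simplified sketch of measuring the level of an evoked response at a probe frequency, in the spirit of the otoacoustic-emission sensing mentioned above, follows; actual otoacoustic-emission measurement is far more involved, and the sample rate, probe frequency, and simulated recording below are assumptions made so the example can run without audio hardware.

```python
# Hedged sketch: emit a probe tone, "record" an ear-adjacent microphone, and measure
# the response level at the probe frequency. The recording here is simulated.
import numpy as np

SAMPLE_RATE = 48_000   # Hz, assumed capture rate
PROBE_FREQ = 1_000.0   # Hz, assumed probe tone frequency

def probe_tone(duration_s: float = 0.5) -> np.ndarray:
    t = np.arange(int(duration_s * SAMPLE_RATE)) / SAMPLE_RATE
    return 0.1 * np.sin(2 * np.pi * PROBE_FREQ * t)

def response_level_db(recording: np.ndarray) -> float:
    """Magnitude (dBFS) of the recording at the probe frequency via a windowed FFT bin."""
    window = np.hanning(recording.size)
    spectrum = np.fft.rfft(recording * window)
    freqs = np.fft.rfftfreq(recording.size, d=1.0 / SAMPLE_RATE)
    bin_index = int(np.argmin(np.abs(freqs - PROBE_FREQ)))
    magnitude = 2.0 * np.abs(spectrum[bin_index]) / np.sum(window)
    return 20.0 * np.log10(max(magnitude, 1e-12))

# Simulated capture: an attenuated, delayed copy of the probe plus noise stands in for
# what microphone input component 210a or 210b might record near an ear.
probe = probe_tone()
simulated_recording = 0.01 * np.roll(probe, 100) + 0.001 * np.random.randn(probe.size)
print(f"response at {PROBE_FREQ:.0f} Hz: {response_level_db(simulated_recording):.1f} dBFS")
```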
Any one or more of assemblies 200a-200f may include any other suitable output components that may be operative to adjust any suitable physical attribute of that assembly (e.g., based on any suitable physical system attribute adjustment data 99 from device 100), such as a sound wave reflecting subassembly output component (e.g., any suitable physical or mechanical sound wave reflecting component(s) that may be operative to reflect sound waves in any suitable manner) and that may be moved in one or more directions within environment E (e.g., with respect to a sound wave emitting subassembly of device 100 and/or with respect to a user or otherwise), any suitable physical or mechanical movement output component that may be operative to be moved for adjusting any suitable physical system attribute(s) of the assembly (e.g., motors, piezoelectric actuators, etc.) and that may be embedded within or coupled to a sound wave reflecting component or any other suitable component of the assembly, and/or the like. Additionally or alternatively, each one of assemblies 200a-200f may include any suitable input component that may be operative to detect any suitable environmental attribute(s) of environment E (e.g., for providing any suitable detected environmental attribute data 91 for use by device 100).
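The exchange of detected environmental attribute data 91 and physical system attribute adjustment data 99 between an auxiliary assembly and device 100 could be pictured, purely as an assumption-laden example, with the small serialization sketch below; the JSON field names, the in-process "transport," and the example adjustment policy are not defined by this description.

```python
# Hedged sketch: an auxiliary assembly reports data 91; device 100 replies with data 99.
import json

def auxiliary_report(assembly_id: str, detected: dict) -> str:
    """Serialize environmental attribute data 91 sent from an auxiliary assembly."""
    return json.dumps({"assembly": assembly_id, "data_91": detected})

def device_respond(report: str) -> str:
    """Device 100 turns a received report into physical system attribute adjustment data 99."""
    payload = json.loads(report)
    noise = payload["data_91"].get("ambient_noise_db", 0.0)
    # Assumed example policy: ask a reflecting assembly to tilt more steeply in noisier rooms.
    adjustment = {"reflector_tilt_deg": 10.0 + min(20.0, noise / 4.0)}
    return json.dumps({"assembly": payload["assembly"], "data_99": adjustment})

# Example round trip for a wall-mounted reflecting assembly (hypothetical values).
reply = device_respond(auxiliary_report("200b", {"ambient_noise_db": 42.0}))
print(reply)
```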
Therefore, as may be illustrated in
It is understood that the operations shown in process 500 of
Moreover, the processes described with respect to
It is to be understood that program modules and/or various processes or operations of system 1 may be provided as a software construct, firmware construct, one or more hardware components, or a combination thereof. For example, various processes or operations or modules of system 1 may be described in the general context of computer-executable instructions, such as program modules, that may be executed by one or more computers or other devices. Generally, a program module may include one or more routines, programs, objects, components, and/or data structures that may perform one or more particular tasks or that may implement one or more particular abstract data types. It is also to be understood that the number, configuration, functionality, and interconnection of the modules are merely illustrative, and that the number, configuration, functionality, and interconnection of existing modules may be modified or omitted, additional modules may be added, and the interconnection of certain modules may be altered.
At least a portion of one or more of the processes or operations or modules of system 201 may be stored in or otherwise accessible to device 100 in any suitable manner (e.g., in memory 104 of device 100 or via communications component 106 of device 100 and/or in memory 204 of device 200 or via communications component 206 of device 200). Each module of system 201 may be implemented using any suitable technologies (e.g., as one or more integrated circuit devices), and different modules may or may not be identical in structure, capabilities, and operation. Any or all of the processes or operations or modules or other components of system 201 may be mounted on an expansion card, mounted directly on a system motherboard, or integrated into a system chipset component (e.g., into a “north bridge” chip). System 201 may include any amount of dedicated sound processing memory.
Many alterations and modifications of the preferred embodiments will no doubt become apparent to a person of ordinary skill in the art after having read the foregoing description. It is to be understood that the particular embodiments shown and described by way of illustration are in no way intended to be considered limiting. Thus, references to the details of the described embodiments are not intended to limit their scope. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements. It is also to be understood that various directional and orientational terms, such as “up” and “down,” “front” and “back,” “exterior” and “interior,” “top” and “bottom” and “side,” “length” and “width” and “depth,” “thickness” and “diameter” and “cross-section” and “longitudinal,” “X-” and “Y-” and “Z-,” and the like may be used herein only for convenience, and that no fixed or absolute directional or orientational limitations are intended by the use of these words.
This application is a continuation of U.S. patent application Ser. No. 15/657,844, filed Jul. 24, 2017 (now U.S. Pat. No. 10,237,644), which claims the benefit of U.S. Provisional Patent Application No. 62/398,900, filed Sep. 23, 2016, each of which is hereby incorporated by reference herein in its entirety.