This disclosure relates generally to audio devices, and more specifically to audio devices augmented with acoustic metamaterials.
Conventional audio devices are typically built from naturally occurring materials. These materials have properties that act as constraints on the size, weight, and power of conventional audio devices. In addition, when designing audio modules having a small form factor, space is an important constraint. However, limited space may introduce high stiffness, potentially lowering speaker output, and may also introduce unwanted resonances.
In accordance with some embodiments, an audio system includes a transducer configured to emit sound, and an enclosure containing the transducer and forming a front volume and a rear volume on opposite sides of the transducer. In some embodiments, the enclosure includes an output port configured to output audio from the front volume that is generated by the transducer, and a plurality of channels formed within the rear volume. The plurality of channels together form an acoustic metamaterial configured to virtually increase the rear volume, thereby amplifying a bass portion of the sound to form audio content that is presented to a user.
In some embodiments, the plurality of channels virtually increase the rear volume, such that an equivalent acoustic volume (Vas) of the audio system is greater than the combined volume of the front and rear volumes.
In some embodiments, the plurality of channels comprise channels of a plurality of different depths. The depths of the channels may be selected to attenuate one or more selected resonance frequencies. For example, the depths of one or more channels may be selected such that the channels function as quarter wave resonators for frequencies of the one or more selected resonance frequencies. In some embodiments, the plurality of channels have dimensions selected to match an impedance of the front volume.
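The quarter-wave relationship implied above can be sketched as follows: a closed channel of depth L resonates at f = c / (4L), so the depth for a target resonance is L = c / (4f). The function name and the 343 m/s speed of sound are illustrative assumptions, not values from the disclosure:

```python
# Illustrative sketch (not from the disclosure): depth of a closed channel
# acting as a quarter-wave resonator for a target resonance frequency.
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C (assumed)

def quarter_wave_depth(frequency_hz: float) -> float:
    """Return the channel depth (meters) whose quarter-wave resonance
    falls at frequency_hz, i.e., depth = c / (4 * f)."""
    return SPEED_OF_SOUND / (4.0 * frequency_hz)

# A channel tuned to attenuate a 3 kHz resonance would be roughly 28.6 mm deep.
depth_m = quarter_wave_depth(3000.0)
```

Note that higher target frequencies require shallower channels, which is why an array of different depths can cover a band of resonances.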
In some embodiments, the plurality of channels include one or more straight channels that extend in a direction orthogonal to a surface area of the transducer. In some embodiments, the plurality of channels include one or more channels having at least one bend. In some embodiments, the plurality of channels include one or more channels coiled around the transducer. In some embodiments, the plurality of channels are formed within a frame of a headset.
The FIGURES depict embodiments of the present disclosure for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles, or benefits touted, of the disclosure described herein.
An audio system is provided that includes one or more audio assemblies and an audio controller (e.g., controls audio content output by the audio system). The one or more audio assemblies include a speaker, which may be integrated into an artificial reality system.
The audio assembly includes at least a portion of a speaker enclosure, also referred to herein as an “enclosure.” In some embodiments, at least a portion of the enclosure is part of a device (e.g., a personal audio device) the audio assembly couples to. A personal audio device is a device worn and/or carried by a user that includes the audio system, and is configured to present audio to a user via the audio system. A personal audio device may be, e.g., a headset, a cell phone, a tablet, some other device configured to present audio to a user via the audio system, or some combination thereof. In other embodiments the audio assembly includes all of the enclosure, and the whole enclosure couples to a device (e.g., a headset). For example, in some embodiments, the audio assembly may be integrated or coupled into a portion of a frame of a headset, e.g., integrated into a leg portion of the frame. In some embodiments, the enclosure is integrated into a temple portion of a leg portion of the frame, the temple portion corresponding to a temple region on a user's head. The audio assembly has a small form factor (e.g., having a physical volume less than 5 cubic centimeters) and weight (e.g., less than 2 grams), which may result in a more comfortable experience for the user of the headset with the audio assembly integrated into the temple portion of the frame, without sacrificing audio quality and/or audio volume.
In some embodiments, the enclosure of the speaker may contain acoustic metamaterials. Acoustic metamaterials can manipulate and control soundwaves in ways that are not possible using conventional materials. In some embodiments, this sound wave control is achieved by manipulating transmission medium parameters such as bulk modulus, density, chirality, etc. Density and bulk modulus are both parameters of the acoustic refractive index, and manipulating them changes the speed of sound in the medium. In some embodiments, this can be achieved by designing a series of sub-wavelength structures to parametrically tune these parameters, thus achieving a more efficient way to deliver high quality sound to the ear. As such, as used herein, acoustic metamaterials may refer to synthetic composite materials serving as a transmission medium for sound emitted by a speaker, a material shaped into physical structures configured to tune emitted sound in a desired manner, and/or a combination thereof.
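The dependence of sound speed on bulk modulus and density noted above can be made concrete with the standard relation c = sqrt(K / ρ); the function below is a hedged sketch and the values for air are approximate:

```python
import math

def speed_of_sound(bulk_modulus_pa: float, density_kg_m3: float) -> float:
    """Speed of sound c = sqrt(K / rho). Tuning either parameter of the
    transmission medium shifts c, which is the effect a metamaterial exploits."""
    return math.sqrt(bulk_modulus_pa / density_kg_m3)

# Approximate values for air: K ~ 1.42e5 Pa, rho ~ 1.21 kg/m^3, giving c near 343 m/s.
c_air = speed_of_sound(1.42e5, 1.21)
```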
Various embodiments of audio devices that are augmented with acoustic metamaterials are described herein. The audio devices may be wearable devices. A wearable device may be, e.g., a headset (e.g., as described below with regard to
Embodiments discussed herein include several ways to use the audio metamaterial to enhance the quality (e.g., frequency bandwidth, sound clarity, loudness) of audio devices. In some embodiments, audio metamaterials are used to virtually enhance a back-volume for a tight back-volume monopole closed-box speaker. This can increase the audio bandwidth, especially to increase the bass performance. In other embodiments, audio metamaterials may be used as an absorption material to attenuate unwanted high-frequency resonances. In other embodiments, audio metamaterials may include straight or meandering tubes configured to alter the effective air density and sound speed within the structure.
Embodiments of the invention may include or be implemented in conjunction with an artificial reality system. Artificial reality is a form of reality that has been adjusted in some manner before presentation to a user, which may include, e.g., a virtual reality (VR), an augmented reality (AR), a mixed reality (MR), a hybrid reality, or some combination and/or derivatives thereof. Artificial reality content may include completely generated content or generated content combined with captured (e.g., real-world) content. The artificial reality content may include video, audio, haptic feedback, or some combination thereof, any of which may be presented in a single channel or in multiple channels (such as stereo video that produces a three-dimensional effect to the viewer). Additionally, in some embodiments, artificial reality may also be associated with applications, products, accessories, services, or some combination thereof, that are used to create content in an artificial reality and/or are otherwise used in an artificial reality. The artificial reality system that provides the artificial reality content may be implemented on various platforms, including a wearable device (e.g., headset) connected to a host computer system, a standalone wearable device (e.g., headset), a mobile device or computing system, or any other hardware platform capable of providing artificial reality content to one or more viewers.
Audio Assembly
Additionally, the speaker 110 may have a small size suitable for various design requirements, such as for integration with a headset. In some embodiments, the speaker 110 is a speaker with a total area of the membrane 120 being less than 200 square millimeters.
The speaker 110 includes components not shown in
In some embodiments, a resonant frequency of the speaker 110 may be based upon a Cms of the membrane 120, e.g., a speaker 110 having a high Cms membrane 120 with low stiffness may have a lower resonant frequency, relative to speakers of the same size having a lower Cms membrane, according to some embodiments. Lower resonant frequencies enable the audio system to have a larger bandwidth and improved performance at low frequencies compared to audio systems with higher resonant frequencies. In some embodiments, the speaker 110, which may be designed for use in consumer electronics or wearables, has a resonant frequency between 50 Hz and 2000 Hz.
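The effect of compliance on resonant frequency described above follows the standard mass-compliance resonance formula, Fs = 1 / (2π sqrt(Mms · Cms)); the moving mass Mms and the parameter values below are illustrative assumptions, not taken from the disclosure:

```python
import math

def resonant_frequency_hz(moving_mass_kg: float, compliance_m_per_n: float) -> float:
    """Fs = 1 / (2*pi*sqrt(Mms * Cms)): a higher-compliance (lower-stiffness)
    membrane yields a lower resonant frequency for the same moving mass."""
    return 1.0 / (2.0 * math.pi * math.sqrt(moving_mass_kg * compliance_m_per_n))

# Doubling Cms lowers Fs by a factor of sqrt(2); both results land in the
# 50 Hz to 2000 Hz range mentioned in the text.
f_high_cms = resonant_frequency_hz(2e-4, 2e-3)  # ~252 Hz
f_low_cms = resonant_frequency_hz(2e-4, 1e-3)   # ~356 Hz
```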
The enclosure 130 forms a cavity 140 (also referred to as a rear cavity, rear volume, or back volume) behind the speaker 110. In conventional speakers, the cavity may be in the form of a sealed box (e.g., a closed cavity) or a vented box (e.g., a cavity with one or more open rear ports) sized to achieve a desired sound pressure level (SPL) for the speaker 110. In some embodiments, the cavity is formed into a plurality of long, narrow channels 150, in order to achieve improved bass performance in comparison to a speaker with a rear cavity formed as a sealed box with the same total volume, or similar acoustic performance (e.g., same SPL) with a smaller volume, by virtually increasing the volume of the rear cavity. As used herein, “virtually” increasing the volume of the rear cavity may refer to shaping the rear cavity (e.g., by forming the rear cavity to include the plurality of channels) to modify the acoustic properties of the rear cavity to be equivalent to those of a conventional sealed box rear cavity having a larger volume than the actual volume of the rear cavity, e.g., such that an equivalent acoustic volume (Vas) of the rear cavity may exceed the actual volume of the rear cavity by ˜30% in a small form factor microspeaker. This may be advantageous for audio systems used in headsets, where the audio assembly may need to fit in a relatively small space. Thus, the audio assemblies may satisfy the design requirements for a variety of headset configurations without sacrificing audio performance and/or audio volume. In comparison to other audio systems of comparable size and weight, the audio system may generate sounds at a higher sound volume with the same or less electrical power input, according to some embodiments.
For instance, using the same electrical power input, the audio system may generate sounds at a higher sound volume than a comparably sized audio system that does not include acoustic metamaterial where the rear cavity is formed into a plurality of channels, according to some embodiments.
In some embodiments, such as that illustrated in
In some embodiments, as illustrated in
In some embodiments, the enclosure 130 may further form a front cavity (or front volume, not shown in
Headset
As discussed above, in some embodiments, the audio assembly 100 may be implemented as part of a headset, e.g., integrated into a temple portion of an eyewear device.
The frame 205 holds the other components of the headset 200. The frame 205 includes a front part that holds the one or more display elements 210 and end pieces (e.g., temples) to attach to a head of the user. The front part of the frame 205 bridges the top of the nose of the user. The length of the end pieces may be adjustable (e.g., adjustable temple length) to fit different users. The end pieces may also include a portion that curls behind the ear of the user (e.g., temple tip, ear piece). The end piece may also be referred to herein as a “leg portion of the frame.”
The one or more display elements 210 provide light to a user wearing the headset 200. As illustrated, the headset includes a display element 210 for each eye of a user. In some embodiments, a display element 210 generates image light that is provided to an eyebox of the headset 200. The eyebox is a location in space that an eye of the user occupies while wearing the headset 200. For example, a display element 210 may be a waveguide display. A waveguide display includes a light source (e.g., a two-dimensional source, one or more line sources, one or more point sources, etc.) and one or more waveguides. Light from the light source is in-coupled into the one or more waveguides, which output the light in a manner such that there is pupil replication in an eyebox of the headset 200. In-coupling and/or outcoupling of light from the one or more waveguides may be done using one or more diffraction gratings. In some embodiments, the waveguide display includes a scanning element (e.g., waveguide, mirror, etc.) that scans light from the light source as it is in-coupled into the one or more waveguides. Note that in some embodiments, one or both of the display elements 210 are opaque and do not transmit light from a local area around the headset 200. The local area is the area surrounding the headset 200. For example, the local area may be a room that a user wearing the headset 200 is inside, or the user wearing the headset 200 may be outside and the local area is an outside area. In this context, the headset 200 generates VR content. Alternatively, in some embodiments, one or both of the display elements 210 are at least partially transparent, such that light from the local area may be combined with light from the one or more display elements to produce AR and/or MR content.
In some embodiments, a display element 210 does not generate image light, and instead is a lens that transmits light from the local area to the eyebox. For example, one or both of the display elements 210 may be a lens without correction (non-prescription) or a prescription lens (e.g., single vision, bifocal and trifocal, or progressive) to help correct for defects in a user's eyesight. In some embodiments, the display element 210 may be polarized and/or tinted to protect the user's eyes from the sun.
Note that in some embodiments, the display element 210 may include an additional optics block (not shown). The optics block may include one or more optical elements (e.g., lens, Fresnel lens, etc.) that direct light from the display element 210 to the eyebox. The optics block may, e.g., correct for aberrations in some or all of the image content, magnify some or all of the image, or some combination thereof.
The DCA determines depth information for a portion of a local area surrounding the headset 200. The DCA includes one or more imaging devices 220 and a DCA controller (not shown in
The DCA controller computes depth information for the portion of the local area using the captured images and one or more depth determination techniques. The depth determination technique may be, e.g., direct time-of-flight (ToF) depth sensing, indirect ToF depth sensing, structured light, passive stereo analysis, active stereo analysis (uses texture added to the scene by light from the illuminator 225), some other technique to determine depth of a scene, or some combination thereof.
The DCA may include an eye tracking unit that determines eye tracking information. In other embodiments, the eye tracking unit is separate from the DCA. The eye tracking information may comprise information about a position and an orientation of one or both eyes (within their respective eye-boxes). The eye tracking unit may include one or more cameras. The eye tracking unit estimates an angular orientation of one or both eyes based on captured images of one or both eyes from the one or more cameras. In some embodiments, the eye tracking unit may also include one or more illuminators that illuminate one or both eyes with an illumination pattern (e.g., structured light, glints, etc.). The eye tracking unit may use the illumination pattern in the captured images to determine the eye tracking information. The headset 200 may prompt the user to opt in to allow operation of the eye tracking unit. For example, by opting in, the user may allow the headset 200 to detect and store images of the user's eyes and/or eye tracking information of the user.
The audio system provides audio content. The audio system includes a sensor array, a speaker array, and an audio controller 230. However, in other embodiments, the audio system may include different and/or additional components. Similarly, in some cases, functionality described with reference to the components of the audio system can be distributed among the components in a different manner than is described here. For example, some or all of the functions of the controller may be performed by a remote server.
The speaker array presents sound to the user. The speaker array includes one or more audio assemblies. As shown in
The sensor array detects sounds within the local area of the headset 200. The sensor array includes a plurality of acoustic sensors 245. An acoustic sensor 245 captures sounds emitted from one or more sound sources in the local area (e.g., a room). Each acoustic sensor is configured to detect sound and convert the detected sound into an electronic format (analog or digital). The acoustic sensors 245 may be acoustic wave sensors, microphones, sound transducers, or similar sensors that are suitable for detecting sounds.
In
In some embodiments, each of the output ports 250 faces an interior of the frame 205. As used herein, the interior is a direction facing the head of the user wearing the headset 200, while the exterior is the direction facing away from the head of the user wearing the headset 200.
The audio system may also be included in or integrated with devices other than a headset, according to some embodiments. For example, the audio system may be integrated with a mobile device, or any other application requiring a small, light-weight speaker with relatively efficient audio performance.
In some embodiments, one or more acoustic sensors 245 may be placed in an ear canal of each ear (e.g., acting as binaural microphones). In some embodiments, the acoustic sensors 245 may be placed on an exterior surface of the headset 200, placed on an interior surface of the headset 200, separate from the headset 200 (e.g., part of some other device), or some combination thereof. The number and/or locations of acoustic sensors 245 may be different from what is shown in
The audio controller 230 processes information from the sensor array that describes sounds detected by the sensor array. The audio controller 230 may comprise a processor and a computer-readable storage medium. The audio controller 230 may be configured to generate direction of arrival (DOA) estimates, generate acoustic transfer functions (e.g., array transfer functions and/or head-related transfer functions), track the location of sound sources, form beams in the direction of sound sources, classify sound sources, generate sound filters for the speakers 235, or some combination thereof.
The position sensor 215 generates one or more measurement signals in response to motion of the headset 200. The position sensor 215 may be located on a portion of the frame 205 of the headset 200. The position sensor 215 may include an inertial measurement unit (IMU). Examples of position sensor 215 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU, or some combination thereof. The position sensor 215 may be located external to the IMU, internal to the IMU, or some combination thereof.
In some embodiments, the headset 200 may provide for simultaneous localization and mapping (SLAM) for a position of the headset 200 and updating of a model of the local area. For example, the headset 200 may include a passive camera assembly (PCA) that generates color image data. The PCA may include one or more RGB cameras that capture images of some or all of the local area. In some embodiments, some or all of the imaging devices 220 of the DCA may also function as the PCA. The images captured by the PCA and the depth information determined by the DCA may be used to determine parameters of the local area, generate a model of the local area, update a model of the local area, or some combination thereof. Furthermore, the position sensor 215 tracks the position (e.g., location and pose) of the headset 200 within the room. Additional details regarding the components of the headset 200 are discussed below in connection with
Some embodiments of the headset 200 and audio system have different components than those described here. For example, the enclosure 240 may include a different configuration of ports, for example, with a different number, shape, type, and/or size of ports. The example of the audio system shown in
In some embodiments, the rear portion of the enclosure 360 is a part of the frame of the headset, such that the temple portion of the frame 330 and the rear portion of the enclosure 360 form one continuous body. In other embodiments, the front portion of the enclosure 340 is a part of the frame of the headset. In the example shown in
The speaker 320 may be contained by the enclosure 310 integrated into the temple portion of the frame 330, in a way that is optimal for the space and size constraints of the frame. The shape of a speaker in the audio system may be configured to optimize the audio performance of the audio system, for the size and space constraints of the frame of the headset.
The speaker 320 is contained by the enclosure 310 and positioned in a space between the rear portion of the enclosure 360 and the front portion of the enclosure 340. The enclosure 310 forms the rear cavity and the front cavity, with the speaker 320 separating the rear cavity from the front cavity. The output port 350 is coupled to the front cavity. The enclosure 310 may include additional components than shown in
The enclosure 310 may have a small form factor (e.g. a total volume of the enclosure may be less than 5 cc) so as to be more easily integrated into, e.g., artificial reality headsets, than enclosures for audio systems with a larger size. In some embodiments, a geometry of the acoustic metamaterial of the rear cavity (e.g., one or more channels) may be shaped based on the form factor of the enclosure 310. For example, although
In other embodiments, one or more of the channels 370 may be curved or coiled. For example, in some embodiments, one or more of the channels may be formed in a shape of a coil, e.g., coiled around a periphery of the speaker or the transducer, or to form a spiral shape. In some embodiments, one or more of the channels are arranged to have openings facing the transducer of the speaker, with subsequent bends to form a coil in a plane parallel to a plane of the transducer of the speaker, reducing the dimensions of the audio assembly in a direction orthogonal to the membrane compared to if the channels were instead formed to extend straight back away from the transducer.
As shown in
Virtual Volume Increase and Attenuating Resonances Using Rear Cavity Channels
As discussed above in relation to
where Cab corresponds to an acoustic compliance of the rear cavity, Ceq corresponds to an equivalent compressibility of air within the rear cavity, V corresponds to a volume of the rear cavity, and Zab corresponds to an impedance of the rear cavity for a given frequency (e.g., corresponding to angular frequency ω).
When considering the thermal viscous effect introduced by the plurality of channels, the equivalent compressibility of air within the rear cavity Ceq is greater than the non-viscous compressibility of air, resulting in a larger compliance Cab and a smaller impedance Zab, allowing for larger displacement of the speaker membrane and louder sound. The effect of the channels on the equivalent compressibility Ceq is based on the cross-section of the channels. For example, in some embodiments, the equivalent compressibility Ceq is characterized by Equation (3) below (e.g., an expanded Taylor series):
where a and w correspond to the cross-sectional dimensions of the channels (e.g., length and width), αk=(k+½)π/a and βk=(k+½)π/w are constants based on the channel dimensions a and w, v′ is a function of air viscosity, thermal conductivity, and specific heat at constant volume, P0 represents air pressure, and γ represents the ratio of specific heats. In other words, the equivalent compressibility Ceq increases as the cross-sectional dimensions of the channels decrease, due to the thermal viscous effect of air within the channels, translating to a larger compliance Cab and a lower impedance Zab.
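Since Equations (1) through (3) are not reproduced here, the qualitative relationship in the paragraphs above can be sketched under two assumed forms: rear-cavity compliance Cab = Ceq · V (compressibility times volume) and cavity impedance Zab = 1 / (jωCab). These are stand-in expressions consistent with the stated trend, not the disclosure's exact equations:

```python
def rear_cavity_impedance(omega: float, volume_m3: float, c_eq: float) -> complex:
    """Hedged sketch (assumed forms, not the disclosure's exact equations):
    Cab = Ceq * V and Zab = 1 / (j * omega * Cab)."""
    c_ab = c_eq * volume_m3
    return 1.0 / (1j * omega * c_ab)

# A larger equivalent compressibility Ceq gives a smaller impedance magnitude,
# allowing larger membrane displacement and louder sound at the same drive level.
OMEGA = 2.0 * 3.141592653589793 * 100.0  # 100 Hz, illustrative
z_plain = rear_cavity_impedance(OMEGA, 1e-6, 7.0e-6)
z_channeled = rear_cavity_impedance(OMEGA, 1e-6, 9.1e-6)  # ~30% higher Ceq (assumed)
```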
By structuring the rear cavity of the audio assembly to include a plurality of narrow channels, the rear cavity of the speaker is virtually increased for low frequency audio (e.g., a bass portion of the audio), relative to a conventional closed back speaker, e.g., by increasing the effective compressibility of the rear cavity at low frequencies, which increases the compliance of the transducer. For example,
In some embodiments, acoustic metamaterials may be used as an absorption material for attenuating unwanted resonances. Speakers in tiny cavities tend to develop unwanted resonances above the resonant frequency of the speaker, e.g., in the range of 1 kHz to 6 kHz (separate from the speaker resonance, which is usually in the range of 50 Hz to 2000 Hz). These unwanted resonances introduce harsh sound and distortions. By utilizing the thermal viscous effect of the channels, these unwanted resonances can be attenuated. For example, in some embodiments, the acoustic impedance of the metamaterial is tunable by tuning the resonances of the resonators of which it is composed (e.g., quarter-wavelength resonators). If the impedance of the metamaterial is matched to the impedance of a medium on the other side of the interface (e.g., the other side of the membrane) by carefully designing the effective width and length of the resonant structures, sound will be absorbed at the interface. In some embodiments, a broadband absorber may be obtained by coupling resonators with different resonances. For example, as discussed above, the acoustic metamaterial of the back cavity may form an array of channels, where each channel corresponds to a quarter-wavelength resonator for specific frequencies. By selecting the number and lengths of the channels of the array to cover a range of different frequencies, and to match the impedance over that frequency range of the medium in front of the speaker through which sound transmits, absorption of sound waves over different frequencies can be achieved.
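The idea of covering a frequency band with an array of differently tuned quarter-wave channels can be sketched as follows; the log-spaced choice of target frequencies and the channel count are illustrative assumptions, not specified in the disclosure:

```python
SPEED_OF_SOUND = 343.0  # m/s in air, assumed

def broadband_channel_depths(f_low_hz: float, f_high_hz: float, n_channels: int) -> list:
    """Quarter-wave channel depths for n_channels target frequencies spaced
    logarithmically between f_low_hz and f_high_hz, approximating a
    broadband absorber built from coupled resonators."""
    step = (f_high_hz / f_low_hz) ** (1.0 / (n_channels - 1))
    targets = [f_low_hz * step ** k for k in range(n_channels)]
    return [SPEED_OF_SOUND / (4.0 * f) for f in targets]

# Six channels covering the 1-6 kHz band of unwanted resonances; depths shrink
# as the target frequency rises.
depths = broadband_channel_depths(1000.0, 6000.0, 6)
```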
For example, in some embodiments, an absorption coefficient (A) representing how much sound energy is absorbed by the metamaterial may be determined as a function of the impedance of the surrounding air or surrounding acoustic material (Z0) and an impedance of the metamaterial (Zmeta), such as that given in Equation (4):
where the metamaterial impedance Zmeta is based on the number and dimensions of the channels in the array (see Equation (5)), where each channel has an impedance Zi, which may be determined as a function of the total number of channels, a filling ratio of the channels (e.g., a ratio indicating an amount of the back cavity occupied by the metamaterial), a measure of thermal viscosity, and an effective length of the channel.
In some embodiments, the channels of the acoustic metamaterial are selected such that the metamaterial impedance Zmeta matches that of the surrounding air Z0, to achieve total absorption of high frequencies (e.g., A=1), to prevent or reduce high-frequency resonances caused by finite-sized rear cavities.
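Equation (4) is not reproduced here; a common form of the absorption coefficient in terms of the pressure reflection at an impedance interface, A = 1 − |(Zmeta − Z0)/(Zmeta + Z0)|², is used below as a hedged stand-in. The Z0 value for air is approximate:

```python
def absorption_coefficient(z_meta: complex, z0: float) -> float:
    """Hedged stand-in for Equation (4): fraction of incident sound energy
    absorbed at the interface, computed from the pressure reflection coefficient."""
    reflection = (z_meta - z0) / (z_meta + z0)
    return 1.0 - abs(reflection) ** 2

Z0_AIR = 413.0  # Pa*s/m, approximate characteristic impedance of air (assumed)

# A perfect impedance match (Zmeta == Z0) gives total absorption, A = 1;
# any mismatch reflects part of the energy, so A < 1.
a_matched = absorption_coefficient(413.0 + 0j, Z0_AIR)
a_mismatched = absorption_coefficient(900.0 + 0j, Z0_AIR)
```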
In some embodiments, the metamaterial may be implemented as one or more open-ended tubes, to increase an effective density of the transmission medium, hence increasing the effective mass. In some embodiments, the effective density for different audio frequencies may be adjusted by varying the dimensions of the channel.
As such, as described above, in some embodiments, the rear cavity of a speaker may be implemented as a plurality of channels in order to “virtually” increase a volume of the rear cavity (e.g., by increasing compliance, decreasing impedance), attenuate high frequency resonances, and/or increase effective density, where the number and depths of the channels are selected based on one or more desired resonance frequencies and to achieve a desired impedance value. For example, in some embodiments, the channels of the rear cavity are formed to virtually increase the volume of the rear cavity for low frequency audio (e.g., a bass portion of the audio), while also serving as a broadband absorber for high frequencies.
In some embodiments, for a given fixed rear cavity volume, the number, length, and cross-sections of the channels may be selected to exhibit certain desired characteristics. For example, as discussed above, the thermal viscous effect of the channels is based on the cross-sections of the channels, while the lengths of the channels may be selected to attenuate desired resonance frequencies. If total volume and channel cross-section are fixed, then the total length of the channels may be bounded, where choice of number and length of channels may be based on the resonant frequency of the speaker and desired frequencies to attenuate, as well as ease of fabrication and cost.
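The bound on total channel length for a fixed volume and cross-section described above amounts to a simple feasibility check: the combined channel volume (cross-section times total length) cannot exceed the rear-cavity volume. The function and values below are illustrative assumptions:

```python
def fits_in_cavity(volume_m3: float, cross_section_m2: float, depths_m: list) -> bool:
    """True if the combined volume of the channels (shared cross-section times
    total length) fits within the fixed rear-cavity volume."""
    return cross_section_m2 * sum(depths_m) <= volume_m3

# Six 1 mm^2 channels with depths from ~29 mm down to ~5 mm (tuned to different
# resonances) occupy well under a 1 cc rear cavity.
ok = fits_in_cavity(1e-6, 1e-6, [0.029, 0.021, 0.015, 0.011, 0.008, 0.005])
```

A designer would trade off channel count against the frequencies to attenuate within this budget, alongside fabrication ease and cost.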
Additional Acoustic Metamaterial Applications
In some embodiments, audio metamaterials may be used as part of the duct/porting/horn of a speaker, to increase a broadband sound pressure level output and make the speaker more efficient.
In some embodiments, audio metamaterials may be used to attenuate and/or isolate vibration to avoid vibration contamination. This may be useful in isolating speakers coupled to a frame of a headset to mitigate vibrations caused by the speakers from being transmitted to the frame. In some embodiments, audio metamaterials may be used to isolate the vibration of a loudspeaker embedded on a head-worn device and to prevent contamination of the signal picked up by other sensors such as IMU, cameras, microphones, MEMS actuators, etc.
In some embodiments, the acoustic metamaterials may be used to form an acoustic diode. An acoustic diode is a one-way acoustic transmission device, and in some embodiments may be configured to attenuate noise and mitigate the occlusion effect. The occlusion effect occurs when one's ear canal is blocked by an in-ear device (e.g., a hearing aid or a hearable device). In such cases, the acoustic metamaterials may be used to create a one-way valve, letting the low-frequency occlusion sound escape from the ear canal while preventing environmental sound or noise from getting into the ear canal. For example, in some embodiments, occlusion problems happen mostly below 1 kHz, and acoustic feedback issues happen above 1 kHz. By using this metamaterial acoustic diode, the noise, occlusion, and acoustic feedback issues may be mitigated all at once.
In some embodiments, an audio device may include a structure, with or without additional sound sources, that breaks reciprocity to form a one-way broadband sound transmission valve (acoustic diode), which attenuates background noise without introducing an occlusion effect for the user. With careful tuning of the frequency range, the leaked-out sound does not contaminate the external microphone on the hearing device or cause feedback problems. In some embodiments, an in-ear device may be configured to fit at least partially within and occlude an ear canal of a user. The in-ear device includes acoustic metamaterial configured to transmit sound (e.g., low frequency sound, such as self-generated sounds like vocalization, chewing, etc.) from within the ear canal to a local area outside of the ear canal, and to block sound from passing from the local area through the in-ear device into the ear canal.
In some embodiments, acoustic metamaterials are engineered to serve in different audio/media devices to enhance the audio quality and reduce the total power consumption of these device. Power consumption is reduced or improved, because the user can get passive sound amplification using these devices. For example, in an audio device comprising a transducer and a passive amplifier, the transducer is configured to generate sound, while the passive amplifier is composed of a plurality of shapes that together form an acoustic metamaterial, where the acoustic metamaterial is configured to amplify a bass portion of the sound (typically below 1 kHz) to form audio content that is presented to a user.
Note that in some cases sound is generated by a speaker, and the sound is then provided via an acoustic waveguide to a port from which the sound exits the acoustic waveguide. In some instances, the acoustic waveguide can impart resonances and/or anti-resonances at various frequencies in the audible band (20 Hz to 20 kHz). In some embodiments, acoustic metamaterials may be used to correct for and/or prevent the resonances and/or anti-resonances from occurring in the sound exiting the port.
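The resonances a waveguide can impart may be estimated by idealizing it as a straight tube open at both ends, whose resonance frequencies are f_n = n·c/(2L). The sketch below lists which of those fall inside the audible band; the tube length used in the usage note and the open-open idealization are illustrative assumptions, and real waveguides deviate from this model.

```python
# Idealized estimate of the resonance frequencies a straight acoustic
# waveguide can impart, treating it as a tube open at both ends
# (f_n = n * c / (2 * L)). Illustrative only; end corrections, bends,
# and losses in a real waveguide shift these values.

SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def audible_tube_resonances(length_m, f_min=20.0, f_max=20_000.0):
    """List open-open tube resonance frequencies within the audible band."""
    resonances = []
    n = 1
    while True:
        f_n = n * SPEED_OF_SOUND_M_S / (2.0 * length_m)
        if f_n > f_max:
            break  # all higher modes are above the audible band
        if f_n >= f_min:
            resonances.append(f_n)
        n += 1
    return resonances
```

For an assumed 5 cm waveguide, this predicts a fundamental near 3.43 kHz with harmonics at integer multiples up to 20 kHz, i.e., several resonances squarely in the audible band that a metamaterial correction would target.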
Likewise, in some embodiments, it can be useful to have a converging acoustic wave to help direct the acoustic wave towards a particular target location (e.g., to a user's ear canal). Acoustic metamaterials may be used to modify the acoustic waves in such a manner.
In some embodiments, the headset 905 may correct or enhance the vision of a user, protect the eye of a user, or provide images to a user. The headset 905 may be eyeglasses which correct for defects in a user's eyesight. The headset 905 may be sunglasses which protect a user's eye from the sun. The headset 905 may be safety glasses which protect a user's eye from impact. The headset 905 may be a night vision device or infrared goggles to enhance a user's vision at night. Alternatively, the headset 905 may not include lenses and may be just a frame with an audio system 920 that provides audio (e.g., music, radio, podcasts) to a user.
In some embodiments, the headset 905 may be a head-mounted display that presents content to a user comprising augmented views of a physical, real-world environment with computer-generated elements (e.g., two dimensional (2D) or three dimensional (3D) images, 2D or 3D video, sound, etc.). In some embodiments, the presented content includes audio that is presented via an audio system 920 that receives audio information from the headset 905, the console 915, or both, and presents audio data based on the audio information. In some embodiments, the headset 905 presents virtual content to the user that is based in part on a real environment surrounding the user. For example, virtual content may be presented to a user of the eyewear device. The user physically may be in a room, and virtual walls and a virtual floor of the room are rendered as part of the virtual content. In the embodiment of
The audio system 920 includes one or more audio assemblies and an audio controller. For example, the audio system may include one or more audio assemblies coupled to a left side of a frame of the headset 905 and one or more audio assemblies coupled to a right side of the frame of the headset 905. Each audio assembly includes one or more speakers configured to emit sounds. Each audio assembly may also include at least some of an enclosure containing one of the one or more speakers. In some embodiments, the enclosure contains a plurality of speakers. In some embodiments, the remaining portion of the enclosure is part of the frame of the headset 905. In other embodiments, the audio assembly includes all of the enclosure, and the whole enclosure couples to the frame of the headset 905.
The audio assembly may be an embodiment of the audio assembly including the speakers 235 and some of each of the enclosures 240. As described above with regard to
The electronic display 925 displays 2D or 3D images to the user in accordance with data received from the console 915. In various embodiments, the electronic display 925 comprises a single electronic display or multiple electronic displays (e.g., a display for each eye of a user). Examples of the electronic display 925 include: a liquid crystal display (LCD), an organic light emitting diode (OLED) display, an active-matrix organic light-emitting diode display (AMOLED), some other display, or some combination thereof.
The optics block 930 magnifies image light received from the electronic display 925, corrects optical errors associated with the image light, and presents the corrected image light to a user of the headset 905. The electronic display 925 and the optics block 930 may be an embodiment of the display element 110. In various embodiments, the optics block 930 includes one or more optical elements. Example optical elements included in the optics block 930 include: an aperture, a Fresnel lens, a convex lens, a concave lens, a filter, a reflecting surface, or any other suitable optical element that affects image light. Moreover, the optics block 930 may include combinations of different optical elements. In some embodiments, one or more of the optical elements in the optics block 930 may have one or more coatings, such as partially reflective or anti-reflective coatings.
Magnification and focusing of the image light by the optics block 930 allows the electronic display 925 to be physically smaller, weigh less, and consume less power than larger displays. Additionally, magnification may increase the field of view of the content presented by the electronic display 925. For example, the field of view of the displayed content is such that the displayed content is presented using almost all (e.g., approximately 110 degrees diagonal), and in some cases, all of the user's field of view. Additionally, in some embodiments, the amount of magnification may be adjusted by adding or removing optical elements.
In some embodiments, the optics block 930 may be designed to correct one or more types of optical error. Examples of optical error include barrel or pincushion distortion, longitudinal chromatic aberrations, or transverse chromatic aberrations. Other types of optical errors may further include spherical aberrations, chromatic aberrations, errors due to the lens field curvature, astigmatisms, or any other type of optical error. In some embodiments, content provided to the electronic display 925 for display is pre-distorted, and the optics block 930 corrects the distortion when it receives image light from the electronic display 925 generated based on the content.
The DCA 940 captures data describing depth information for a local area surrounding the headset 905. In one embodiment, the DCA 940 may include a structured light projector, an imaging device, and a controller. The imaging device may be an embodiment of the imaging device 120. The structured light projector may be an embodiment of the illuminator 125. The captured data may be images captured by the imaging device of structured light projected onto the local area by the structured light projector. In one embodiment, the DCA 940 may include two or more cameras that are oriented to capture portions of the local area in stereo and a controller. The captured data may be images captured by the two or more cameras of the local area in stereo. The controller computes the depth information of the local area using the captured data. Based on the depth information, the controller determines absolute positional information of the headset 905 within the local area. The DCA 940 may be integrated with the headset 905 or may be positioned within the local area external to the headset 905.
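In the stereo configuration, depth for a point is commonly recovered from its disparity between the two rectified camera images as depth = focal_length × baseline / disparity. The sketch below is a minimal illustration of that relation, not the DCA 940's actual controller logic; the numeric values in the usage note are assumptions.

```python
# Minimal sketch of stereo depth recovery as a controller like the DCA's
# might perform it: depth = focal_length * baseline / disparity, for a
# rectified camera pair. Illustrative only.

def stereo_depth_m(focal_length_px: float, baseline_m: float,
                   disparity_px: float) -> float:
    """Depth (meters) of a point from its disparity between two
    rectified stereo images, given focal length in pixels and the
    camera baseline in meters."""
    if disparity_px <= 0:
        # Zero disparity corresponds to a point at infinity.
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px
```

For example, with an assumed 700 px focal length and 6 cm baseline, a 21 px disparity places the point about 2 m away; smaller disparities map to larger depths, which is why nearby geometry is resolved more precisely than distant geometry.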
The IMU 945 is an electronic device that generates data indicating a position of the headset 905 based on measurement signals received from one or more position sensors 935. The one or more position sensors 935 may be an embodiment of the position sensor 115. A position sensor 935 generates one or more measurement signals in response to motion of the headset 905. Examples of position sensors 935 include: one or more accelerometers, one or more gyroscopes, one or more magnetometers, another suitable type of sensor that detects motion, a type of sensor used for error correction of the IMU 945, or some combination thereof. The position sensors 935 may be located external to the IMU 945, internal to the IMU 945, or some combination thereof.
Based on the one or more measurement signals from one or more position sensors 935, the IMU 945 generates data indicating an estimated current position of the headset 905 relative to an initial position of the headset 905. For example, the position sensors 935 include multiple accelerometers to measure translational motion (forward/back, up/down, left/right) and multiple gyroscopes to measure rotational motion (e.g., pitch, yaw, and roll). In some embodiments, the IMU 945 rapidly samples the measurement signals and calculates the estimated current position of the headset 905 from the sampled data. For example, the IMU 945 integrates the measurement signals received from the accelerometers over time to estimate a velocity vector and integrates the velocity vector over time to determine an estimated current position of a reference point on the headset 905. Alternatively, the IMU 945 provides the sampled measurement signals to the console 915, which interprets the data to reduce error. The reference point is a point that may be used to describe the position of the headset 905. The reference point may generally be defined as a point in space or a position related to the orientation and position of the headset 905.
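The double integration described above can be sketched in one dimension as follows. This is an illustrative forward-Euler scheme under assumed ideal samples; a real IMU pipeline also handles orientation, gravity compensation, and sensor bias, and it is the accumulation of error in exactly this kind of integration that produces the drift discussed below.

```python
# Sketch of IMU dead reckoning: integrate acceleration samples once for
# velocity and again for position (1-D, fixed sample period, forward Euler).
# Illustrative only; bias, gravity, and orientation handling are omitted.

def integrate_position(accel_samples, dt, v0=0.0, p0=0.0):
    """Estimate final position from acceleration samples spaced dt apart."""
    velocity, position = v0, p0
    for a in accel_samples:
        velocity += a * dt          # first integral: acceleration -> velocity
        position += velocity * dt   # second integral: velocity -> position
    return position
```

Because each step compounds the previous velocity estimate, a small constant accelerometer bias grows quadratically in the position output, which is why the console-supplied corrections and DCA-based reference-point updates described below are needed.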
The IMU 945 receives one or more parameters from the console 915. As further discussed below, the one or more parameters are used to maintain tracking of the headset 905. Based on a received parameter, the IMU 945 may adjust one or more IMU parameters (e.g., sample rate). In some embodiments, data from the DCA 940 causes the IMU 945 to update an initial position of the reference point so it corresponds to a next position of the reference point. Updating the initial position of the reference point as the next calibrated position of the reference point helps reduce accumulated error associated with the current position estimated by the IMU 945. The accumulated error, also referred to as drift error, causes the estimated position of the reference point to “drift” away from the actual position of the reference point over time. In some embodiments of the headset 905, the IMU 945 may be a dedicated hardware component. In other embodiments, the IMU 945 may be a software component implemented in one or more processors.
The I/O interface 910 is a device that allows a user to send action requests and receive responses from the console 915. An action request is a request to perform a particular action. For example, an action request may be an instruction to start or end capture of image or video data, an instruction for the audio system 920 to start or stop producing sounds, an instruction to start or end a calibration process of the headset 905, or an instruction to perform a particular action within an application. The I/O interface 910 may include one or more input devices. Example input devices include: a keyboard, a mouse, a game controller, or any other suitable device for receiving action requests and communicating the action requests to the console 915. An action request received by the I/O interface 910 is communicated to the console 915, which performs an action corresponding to the action request. In some embodiments, the I/O interface 910 includes an IMU 945, as further described above, that captures calibration data indicating an estimated position of the I/O interface 910 relative to an initial position of the I/O interface 910. In some embodiments, the I/O interface 910 may provide haptic feedback to the user in accordance with instructions received from the console 915. For example, haptic feedback is provided when an action request is received, or the console 915 communicates instructions to the I/O interface 910 causing the I/O interface 910 to generate haptic feedback when the console 915 performs an action.
The console 915 provides content to the headset 905 for processing in accordance with information received from one or more of: the headset 905 and the I/O interface 910. In the example shown in
The application store 950 stores one or more applications for execution by the console 915. An application is a group of instructions that, when executed by a processor, generates content for presentation to the user. Content generated by an application may be in response to inputs received from the user via movement of the headset 905 or the I/O interface 910. Examples of applications include: gaming applications, conferencing applications, video playback applications, calibration processes, or other suitable applications.
The tracking module 955 calibrates the system environment 900 using one or more calibration parameters and may adjust one or more calibration parameters to reduce error in determination of the position of the headset 905 or of the I/O interface 910. Calibration performed by the tracking module 955 also accounts for information received from the IMU 945 in the headset 905 and/or an IMU 945 included in the I/O interface 910. Additionally, if tracking of the headset 905 is lost, the tracking module 955 may re-calibrate some or all of the system environment 900.
The tracking module 955 tracks movements of the headset 905 or of the I/O interface 910 using information from the one or more position sensors 935, the IMU 945, or some combination thereof. For example, the tracking module 955 determines a position of a reference point of the headset 905 in a mapping of a local area based on information from the headset 905. The tracking module 955 may also determine positions of the reference point of the headset 905 or a reference point of the I/O interface 910 using data indicating a position of the headset 905 from the IMU 945 or using data indicating a position of the I/O interface 910 from an IMU 945 included in the I/O interface 910, respectively. Additionally, in some embodiments, the tracking module 955 may use portions of data indicating a position of the headset 905 from the IMU 945 to predict a future location of the headset 905. The tracking module 955 provides the estimated or predicted future position of the headset 905 or the I/O interface 910 to the engine 960.
The engine 960 also executes applications within the system environment 900 and receives position information, acceleration information, velocity information, predicted future positions, audio information, or some combination thereof of the headset 905 from the tracking module 955. Based on the received information, the engine 960 determines content to provide to the headset 905 for presentation to the user. For example, if the received information indicates that the user has looked to the left, the engine 960 generates content for the headset 905 that mirrors the user's movement in a virtual environment or in an environment augmenting the local area with additional content. Additionally, the engine 960 performs an action within an application executing on the console 915 in response to an action request received from the I/O interface 910 and provides feedback to the user that the action was performed. The provided feedback may be visual or audible feedback via the headset 905 or haptic feedback via the I/O interface 910.
Additional Configuration Information
The foregoing description of the embodiments has been presented for illustration; it is not intended to be exhaustive or to limit the patent rights to the precise forms disclosed. Persons skilled in the relevant art can appreciate that many modifications and variations are possible considering the above disclosure.
Some portions of this description describe the embodiments in terms of algorithms and symbolic representations of operations on information. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combinations thereof.
Any of the steps, operations, or processes described herein may be performed or implemented with one or more hardware or software modules, alone or in combination with other devices. In one embodiment, a software module is implemented with a computer program product comprising a computer-readable medium containing computer program code, which can be executed by a computer processor for performing any or all the steps, operations, or processes described.
Embodiments may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, and/or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a non-transitory, tangible computer readable storage medium, or any type of media suitable for storing electronic instructions, which may be coupled to a computer system bus. Furthermore, any computing systems referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
Embodiments may also relate to a product that is produced by a computing process described herein. Such a product may comprise information resulting from a computing process, where the information is stored on a non-transitory, tangible computer readable storage medium and may include any embodiment of a computer program product or other data combination described herein.
Finally, the language used in the specification has been principally selected for readability and instructional purposes, and it may not have been selected to delineate or circumscribe the patent rights. It is therefore intended that the scope of the patent rights be limited not by this detailed description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of the embodiments is intended to be illustrative, but not limiting, of the scope of the patent rights, which is set forth in the following claims.
This application claims the benefit of and priority to U.S. Provisional Application No. 63/188,025, filed May 13, 2021, the entire contents of which are hereby incorporated by reference for all purposes as if fully set forth herein.