The present disclosure generally relates to systems and methods for generating an audio signal. In some examples the systems and methods of generating an audio signal are applied in a mobile, wearable, or portable device. In other examples the systems and methods of generating an audio signal are applied in earphones, headsets, hearables, or hearing aids.
U.S. Pat. No. 8,861,752 describes a picospeaker, a novel sound-generating device, and a method for sound generation. The picospeaker creates an audio signal by generating an ultrasound acoustic beam which is then actively modulated. The resulting modulated ultrasound signal has a lower acoustic frequency sideband which corresponds to the frequency difference between the frequency of the ultrasound acoustic beam and the modulation frequency. US 20160360320 and US 20160360321 describe MEMS architectures for realizing the picospeaker. US 20160277838 describes one method of implementing the picospeaker using MEMS processing. US 20160277845 describes an alternative method of implementing the picospeaker using MEMS processing.
State-of-the-art approaches to realizing the picospeaker are complex and require many processing steps. Hence it is desirable to provide an architecture and method of implementation that reduce the complexity and the number of processing steps.
“acoustic signal”—as used in the current disclosure means a mechanical wave traversing either a gas, liquid or solid medium with any frequency or spectrum portion between 10 Hz and 10,000,000 Hz.
“audio” or “audio spectrum” or “audio signal”—as used in the current disclosure means an acoustic signal or portion of an acoustic signal with a frequency or spectrum portion between 10 Hz and 20,000 Hz.
“speaker” or “pico speaker” or “micro speaker” or “nano speaker”—as used in the current disclosure means a device configured to generate an acoustic signal with at least a portion of the signal in the audio spectrum.
“membrane”—as used in the current disclosure means a flexible structure constrained by at least two points.
“blind”—as used in the current disclosure means a structure with at least one acoustic port through which an acoustic wave passes with low loss.
“shutter”—as used in the current disclosure means a structure configured to move relative to the blind and increase the acoustic loss of the acoustic port or ports.
“acoustic medium”—as used in the current disclosure means any of, but not limited to: a bounded region in which a material is contained in an enclosed acoustic cavity; an unbounded region in which a material is characterized by a speed of sound and is unbounded in at least one dimension. Examples of an acoustic medium include but are not limited to: air; water; an ear canal; a closed volume around the ear; air in free space; air in a tube or other acoustic channel.
Some embodiments of the present disclosure may generally relate to a speaker device that includes a membrane and a shutter. The membrane is positioned in a first plane and configured to oscillate along a first directional path and at a first frequency effective to generate an ultrasonic acoustic signal. The shutter is positioned in a second plane that is substantially separated from the first plane. The shutter is configured to modulate the ultrasonic acoustic signal such that an audio signal is generated.
Other embodiments of the present disclosure may generally relate to a speaker device comprising an array of membranes and shutters. The membranes and shutters in the array operate either independently or are driven by a common source. Examples of drive signals include, but are not limited to, pulse-width-modulated signals and modulated sinusoidal signals. The driving unit is a semiconductor integrated circuit which includes a communication unit; a charge pump configured to generate a high-voltage signal; and a switching unit configured to modulate the high-voltage signal. The driving unit receives a digital sound data stream and an operating voltage and outputs driving signals for the membrane and the shutter. In some embodiments the membrane and shutter operate asynchronously and/or independently of each other at one or more frequencies. In other embodiments the membrane and shutter operate synchronously at the same frequency. In the synchronous mode of operation, the amplitude of the audio signal is controlled by any of, but not limited to: the relative phase of the membrane and shutter operation; the amplitude of the shutter operation; the amplitude of the membrane operation; or any combination of these.
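As an illustration of the synchronous mode, the following sketch (with assumed parameter values and signal names, not the driving unit's actual interface) shows how the relative phase between the membrane and shutter drive signals scales the recovered audio amplitude:

```python
import numpy as np

# Illustrative sketch only (assumed parameters, not the driving unit's actual
# interface): in the synchronous mode, membrane and shutter are driven at the
# same ultrasonic frequency, and the relative phase between them scales the
# recovered audio amplitude.
fs = 10_000_000              # sample rate, Hz (assumed)
carrier_freq = 300_000       # common membrane/shutter frequency, Hz (assumed)
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz test tone

def drive_signals(phase_offset):
    """Membrane drive: audio-modulated carrier. Shutter drive: carrier at the
    same frequency, shifted by phase_offset (radians)."""
    membrane = audio * np.cos(2 * np.pi * carrier_freq * t)
    shutter = np.cos(2 * np.pi * carrier_freq * t + phase_offset)
    return membrane, shutter

for phase in (0.0, np.pi / 3, np.pi / 2):
    membrane, shutter = drive_signals(phase)
    # Product of the two drives; its audio-band part scales as cos(phase_offset)/2.
    audio_band = np.convolve(membrane * shutter, np.ones(500) / 500, mode="same")
    print(f"relative phase {phase:.2f} rad -> audio amplitude ~ {np.abs(audio_band).max():.3f}")
```

In this sketch the audio-band amplitude falls from roughly 0.5 at zero phase offset toward zero at a quarter-period offset, consistent with phase being usable as a volume control.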
The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.
The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure and are therefore not to be considered limiting of its scope, the disclosure will be described with additional specificity and detail through use of the accompanying drawings.
In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other examples may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and form part of this disclosure. This disclosure is drawn, inter alia, to methods, apparatus, computer programs, and systems of generating an audio signal.
In an embodiment, the speaker device includes a first conductive layer with a plurality of center structures and springs; a second conductive layer with a plurality of perforations and electrical isolation rings; a third conductive layer with a plurality of center structures and springs; and a dielectric layer. The first, second and third conductive layers are in physical contact with the dielectric layer and are electrically isolated from each other.
In some examples, a speaker device is described that includes a membrane and a shutter. The membrane is configured to oscillate along a first directional path and at a combination of frequencies, with at least one frequency effective to generate an ultrasonic acoustic signal. A shutter and a blind are positioned proximate to the membrane. In one non-limiting example the membrane, the blind, and the shutter may be positioned in a substantially parallel orientation with respect to each other. In other examples the membrane, the blind, and the shutter may be positioned in the same plane and the acoustic signal is transmitted along acoustic channels leading from the membrane to the shutter. In a further example the modulator and/or shutter are composed of more than one section.
In some embodiments, the membrane is driven by an electric signal that oscillates at a frequency Ω and hence moves as b·cos(2πΩt), where b is the amplitude of the membrane movement and t is time. The electric signal is further modulated by a portion that is derived from an audio signal a(t). The acoustic signal is characterized as:
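One plausible form of Equation (1), assuming the audio-derived portion simply scales the ultrasonic carrier (a double-sideband form), is

$$ s(t) = b\,a(t)\cos(2\pi\Omega t) \tag{1} $$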
Applying a Fourier transform to Equation (1) results in a frequency domain representation
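Under the same assumed form of Equation (1), Equation (2) would read

$$ S(f) = \frac{b}{2}\bigl[A(f-\Omega) + A(f+\Omega)\bigr] \tag{2} $$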
Where A(f) is the spectrum of the audio signal. Equation (2) describes a signal with upper and lower sidebands around a carrier frequency of Ω. Applying an acoustic modulator operating at frequency Ω to the acoustic signal of Equation (1) results in
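Assuming the modulator acts as a multiplicative transmission of the form l + m·cos(2πΩt), one plausible form of Equation (3) is

$$ s_{\mathrm{mod}}(t) = b\,a(t)\cos(2\pi\Omega t)\,\bigl[\,l + m\cos(2\pi\Omega t)\,\bigr] \tag{3} $$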
Where l is the loss of the modulator, m is the modulation function, and due to energy conservation l+m<1. In the frequency domain
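Expanding the product in Equation (3) and transforming term by term gives, under the same assumed modulator form, one consistent version of Equation (4):

$$ S_{\mathrm{mod}}(f) = \frac{b\,l}{2}\bigl[A(f-\Omega)+A(f+\Omega)\bigr] + \frac{b\,m}{4}\bigl[A(f-2\Omega)+A(f)\bigr] + \frac{b\,m}{4}\bigl[A(f)+A(f+2\Omega)\bigr] \tag{4} $$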
Where (b/4)·m·A(f) is an audio signal. The remaining terms are ultrasound signals, where m·A(f+2Ω) is at twice the modulation frequency and A(f−Ω)+A(f+Ω) is the original unmodulated signal. Additional acoustic signals may be present due to any of, but not limited to, the following: an ultrasound signal from the shutter movement; intermodulation signals due to nonlinearities of the acoustic medium; intermodulation signals due to other sources of nonlinearity, including electronic and mechanical sources.
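The following numerical sketch, based on the assumed reconstruction above rather than on the disclosure's exact equations, illustrates how multiplying the audio-modulated ultrasound carrier by a shutter transmission at the same frequency Ω leaves a recoverable audio-band component:

```python
import numpy as np

# Numerical sketch of the reconstruction above (an assumed model, not the
# disclosure's exact equations): an ultrasound carrier at Omega carries the
# audio a(t); multiplying by a shutter transmission l + m*cos(2*pi*Omega*t)
# and low-pass filtering leaves an audio-band component.
fs = 10_000_000            # sample rate, Hz (assumed)
Omega = 300_000            # ultrasound carrier frequency, Hz (assumed)
b, l, m = 1.0, 0.4, 0.5    # membrane amplitude, modulator loss, modulation depth (l + m < 1)
t = np.arange(0, 0.02, 1 / fs)
a = np.sin(2 * np.pi * 500 * t)                      # 500 Hz test audio

s = b * a * np.cos(2 * np.pi * Omega * t)            # assumed Eq. (1): membrane output
s_mod = s * (l + m * np.cos(2 * np.pi * Omega * t))  # assumed Eq. (3): after the shutter

# Crude low-pass (moving average with roughly 20 kHz bandwidth) isolates the audio band.
win = int(fs / 20_000)
audio_out = np.convolve(s_mod, np.ones(win) / win, mode="same")

gain = np.abs(audio_out).max() / np.abs(a).max()
print(f"recovered audio-band gain ~ {gain:.3f} (value depends on the assumed modulator form)")
```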
In a further example the audio signal is enhanced by the acoustic radiation pressure of the ultrasound signal. This is a new approach to audio generation, in which the audio system generates an ultrasound signal. The ultrasound signal exerts a radiation force on surfaces on which it impinges, including the tympanic membrane (eardrum). By modulating the ultrasound signal, the radiation force magnitude can be changed, thereby effecting mechanical movement of the tympanic membrane, which is registered as sound by the ear (and brain). The radiation pressure of an acoustic signal is well documented and given as
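A plausible form of this well-documented (Langevin-type) relation, consistent with the definitions that follow, is

$$ P = \alpha E = \alpha\,\frac{p^{2}}{\rho\,c^{2}} $$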
Where P is the radiation pressure, and E, p, ρ, and c are the energy density of the sound beam near the surface, the acoustic pressure, the density of the sound medium, and the sound velocity, respectively. α is a constant related to the reflection property of the surface. If all the acoustic energy is absorbed at the surface, α is equal to 1, while for a surface that reflects all the sound energy, α is 2. The sound energy density E carried by the beam is E=W/c, where W is the power density of the transducer. In one example, to effect an audio sensation at the eardrum, an ultrasound signal is modulated with an audio signal. The audio signal causes changes in the acoustic radiation force which are registered as an audio signal by the ear. In one non-limiting example the audio is AM-modulated onto the ultrasound carrier:
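One plausible form of this AM modulation and of the resulting carrier-cycle-averaged energy density, assuming a modulation index m and a carrier pressure amplitude p₀ (both introduced here only for illustration), is

$$ p(t) = p_{0}\,\bigl[1 + m\,a(t)\bigr]\cos(2\pi\Omega t), \qquad E \propto \tfrac{1}{2}\,p_{0}^{2}\,\bigl[1 + m\,a(t)\bigr]^{2} \approx \tfrac{1}{2}\,p_{0}^{2}\,\bigl[1 + 2\,m\,a(t)\bigr] $$

with the approximation holding for small m·a(t).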
E is proportional to m·a(t), and the changes in the radiation force P are proportional to m·a(t), resulting in movement of the eardrum which is proportional to m·a(t). Hence an ultrasound speaker can generate sound using either or both of the methods described above. In one example the methods are used intermittently, in another example the methods are used concurrently, and in another example only modulation or only radiation force is used.
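As a rough numerical illustration of the radiation-force mechanism (all numbers assumed for the example, not taken from the disclosure), the sketch below computes the radiation pressure that an AM-modulated ultrasound beam exerts on a reflecting surface:

```python
import numpy as np

# Rough illustration of the radiation-force mechanism (all numbers assumed
# for the example, not taken from the disclosure): an AM-modulated ultrasound
# beam exerts a radiation pressure whose slowly varying part follows m*a(t).
rho = 1.2        # air density, kg/m^3 (assumed)
c = 343.0        # speed of sound in air, m/s (assumed)
alpha = 2.0      # fully reflecting surface, e.g. an idealized eardrum
p0 = 10.0        # ultrasound pressure amplitude, Pa (assumed)
m = 0.5          # modulation index (assumed)

t = np.linspace(0, 2e-3, 2000)
a = np.sin(2 * np.pi * 1_000 * t)                    # 1 kHz audio

# Carrier-cycle-averaged energy density of the AM beam: E = <p^2> / (rho * c^2).
E = 0.5 * p0**2 * (1 + m * a) ** 2 / (rho * c**2)
P_rad = alpha * E                                    # radiation pressure, Pa

print(f"radiation pressure swings between {P_rad.min():.2e} Pa and {P_rad.max():.2e} Pa")
```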
In a further example, thickness ranges for the layers (all values in microns) are shown in the table.
To summarize, we present a speaker device comprising a first conductive layer with a plurality of center structures and springs; a second conductive layer with a plurality of perforations and electrical isolation rings; a third conductive layer with a plurality of center structures and springs; and a dielectric layer; wherein the first, second and third conductive layers are in physical contact with the dielectric layer and are electrically isolated from each other. In a further example the second conductive layer is physically connected to the dielectric layer along at least 70 percent of its perimeter and restricts airflow from the bottom side of the second conductive layer to the top side of the second conductive layer substantially to a set of perforations in the second conductive layer. In a further example the second conductive layer is physically connected to the dielectric layer along any of, but not limited to, at least 60 percent; at least 80 percent; or at least 90 percent of its perimeter, and restricts airflow from the bottom side of the second conductive layer to the top side of the second conductive layer substantially to a set of perforations in the second conductive layer. In a further example the conductive layers are any of, but not limited to, polysilicon; doped polysilicon; Al; AlCu; AlSiCu; or Ni. In a further example the stress in the conductive layer is tensile. In a further example the stress in the conductive layer is any of, but not limited to: less than 30 MPa; less than 50 MPa; less than 100 MPa; or less than 300 MPa. In a further example the sheet resistance of the conductive layer is any of, but not limited to: less than 10 Ohm per square; less than 50 Ohm per square; less than 500 Ohm per square; or less than 1 kOhm per square. In a further example the dielectric layer material is any of SiN; SiRN; TiN; TaO; TaN; AlOx; or SiO2.
There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost versus efficiency tradeoffs. There are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware.
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of skill in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a floppy disk, a hard disk drive, a Compact Disc (CD), a Digital Versatile Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.).
Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems.
The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected”, or “operably coupled”, to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable”, to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components.
With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for sake of clarity.
It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to disclosures containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations. In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” The terms speaker and picospeaker are interchangeable, and one can be used in place of the other.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.
Number | Date | Country
--- | --- | ---
63433507 | Dec 2022 | US