The present invention relates to an electronic musical instrument that is tonally controlled by the acoustic volume defined by a user's mouth and lips.
Most people use the mouth and lips to make sounds every day, including speaking as well as musical sounds like singing and whistling. This is done by expelling air from the lungs, which can create an audible vibration by exciting the vocal cords (as in singing or speaking) or by creating air turbulence in the mouth (as in whistling). The ability to control the shape of the lips, the degree to which the jaw is held open, the lift of the soft palate within the mouth, the placement of the tongue, and numerous other muscular movements is developed early in life, allowing humans to precisely control the character of the audible vibration (including its frequency and harmonic content) for speaking, singing, and whistling.
Musical instruments allow us to produce sounds beyond the limits of speaking, singing, and whistling, greatly expanding our capacity to produce music in a wide variety of forms. Playing even a simple musical instrument, for example, mastering a piano keyboard or the finger positions of a guitar, often requires many years of practice, and being able to extemporize with such instruments is beyond the reach of all but a few highly skilled players.
The present inventor has recognized that the shape of the human mouth provides an extremely intuitive and precise control source for a musical instrument using a muscle set that is extremely familiar and practiced. Accordingly, the present invention provides a musical instrument controlled in at least one important dimension, such as frequency, by a derived acoustic volume of the user's mouth.
Specifically, in one embodiment, the invention provides a musical instrument having a speaker for receiving a first electrical signal to produce a corresponding audio output, a microphone for receiving an audio input to produce a corresponding second electrical signal, and a mouthpiece providing an acoustic coupling between a user's oral cavity and the speaker and microphone. A circuit communicates with the speaker and microphone to (a) detect an acoustic resonant frequency of the oral cavity; and (b) provide a musical output based on that acoustic resonant frequency.
It is thus a feature of at least one embodiment of the invention to provide a new musical instrument that can be intuitively controlled by mouth volume.
The instrument may include at least one finger-actuable key providing an electrical switch controlling the output during activation of the speaker.
It is thus a feature of at least one embodiment of the invention to introduce, through an intuitive striking of a key, precise timing control of musical notes in addition to the control of frequency or sound timbre.
In some embodiments the instrument may include three finger-actuable keys each providing an electrical switch controlling the output during activation of the speaker to control different frequencies of the output.
It is thus a feature of at least one embodiment of the invention to add additional dimensions of musical control, for example, harmony or spectral shading, through additional buttons.
The instrument may further include an envelope generator for providing amplitude modulation of the output according to an envelope triggered by the key.
It is thus a feature of at least one embodiment of the invention to emulate the envelope shapes of a variety of types of instruments through rapid timing control of the output amplitude.
The musical output may provide a signal having a fundamental frequency that is mapped monotonically to the acoustic resonant frequency of the oral cavity.
It is thus a feature of at least one embodiment of the invention to provide an intuitive mapping between acoustic volume and note. In one embodiment, the fundamental frequency and acoustic resonant frequency are related by a factor of 2^N, where N is an integer; that is, they are the same note, being either identical or one or more octaves apart.
It is thus a feature of at least one embodiment of the invention to provide a musical output approximating that which would be provided by an individual whistling and thus to make use of that intuitive connection to facilitate learning.
The musical instrument may include a second speaker receiving the output to provide an acoustic signal perceivable by a human.
It is thus a feature of at least one embodiment of the invention to allow the musical instrument to be fully self-contained without separate amplifiers or synthesizers for convenient use.
The musical output may provide only quantized note frequencies in a standard musical scale.
It is thus a feature of at least one embodiment of the invention to simplify the production of music by those without perfect pitch and to assist in rapid note creation.
The speaker and the microphone may have substantially parallel axes of maximum output and sensitivity, respectively.
It is thus a feature of at least one embodiment of the invention to arrange the microphone and speaker to minimize crosstalk, allowing more sensitivity to the acoustic volume within the mouth.
The speaker and microphone may be separated from each other, but not from the oral cavity, by a baffle between the speaker and microphone.
It is thus a feature of at least one embodiment of the invention to allow the speaker and microphone to be displaced outside of the mouth for convenient manufacture while minimizing cross talk.
The mouthpiece may provide a lip support ridge extending outwardly from a stop surface, along a direction from the speaker and microphone to the oral cavity, so that the lip support ridge may extend between the user's lips when the stop surface abuts a front of the user's lips.
It is thus a feature of at least one embodiment of the invention to promote consistent placement of the mouthpiece in the mouth to improve the consistency of performance of the instrument.
Front surfaces of the speaker and microphone may be recessed behind a furthest forward extent of the lip support ridge by less than ½ inch.
It is thus a feature of at least one embodiment of the invention to minimize the acoustic volume associated with the mouthpiece to improve a range of performance (for example, frequency range) of the musical instrument.
The speaker and microphone may be separated by less than one quarter of an inch.
It is thus a feature of at least one embodiment of the invention to provide an extremely compact mouthpiece with good coupling to the oral cavity.
The electronic circuit may modify an amplitude of the first signal according to an amplitude of the second signal to promote a constant amplitude of the second signal during changes in the acoustic volume.
It is thus a feature of at least one embodiment of the invention to accommodate variations in attenuation of the mouth cavity at different volumes.
These particular objects and advantages may apply to only some embodiments falling within the claims and thus do not define the scope of the invention.
Referring now to
A natural frequency of a physical system is the frequency at which the system tends to vibrate after impulse excitation. For a mass-spring system, there is a single natural frequency ω0 that is a function of the mass m and the spring constant k. For the air-based system of the Helmholtz resonator, the mass m is roughly proportional to the volume of the neck 20 and opening 18, and the spring constant k is an inverse function of the volume of air in the cavity 16. Thus, in a Helmholtz resonator, the natural frequency ω0 can be varied by varying the size of the neck 20, opening 18, and cavity 16. In a human being, these quantities can be varied by changing the volume of the oral cavity, e.g. how the lips are shaped, moving the jaw, moving the soft palate, and so forth to create openings, necks, and cavities of different sizes and shapes. As used herein, the “oral cavity” refers to the space inside the lips and cheeks above the tongue and below the bony roof of the mouth.
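For concreteness, the standard lumped-parameter Helmholtz relation (well known in acoustics, though not recited in the text above) can be sketched as follows; the dimensions in the example are assumed, illustrative values rather than measured oral-cavity geometry:

```python
import math

def helmholtz_frequency(neck_area_m2, neck_length_m, cavity_volume_m3, c=343.0):
    """Lumped-parameter estimate of the Helmholtz resonant frequency (Hz):
    f0 = (c / (2*pi)) * sqrt(A / (V * L)), where A and L describe the neck/opening
    and V is the cavity volume."""
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area_m2 / (cavity_volume_m3 * neck_length_m))

# Illustrative only: shrinking the cavity (e.g., raising the tongue) raises the resonance.
print(helmholtz_frequency(3e-4, 0.02, 6e-5))  # larger cavity volume -> lower f0 (~860 Hz)
print(helmholtz_frequency(3e-4, 0.02, 2e-5))  # smaller cavity volume -> higher f0 (~1500 Hz)
```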
Referring now to
Referring now to
A microphone 50 is simultaneously held up to the oral cavity 10 and lips 44, also separated by the mouthpiece 48 or physical filter. The microphone 50 receives an acoustic pressure wave modified by the movement of air within the oral cavity 10 and lips 44, which has been excited by the audio signal from the speaker 46 and produces a corresponding electrical microphone signal 54. The amplitude and phase of this microphone signal 54 will vary depending on how close the speaker signal 63′ is to ω0, this relationship being shown in
The controller 52 receives both the microphone signal 54 and the VCO output 63 (taken prior to the automatic gain control circuit 70 so as to provide a more constant amplitude) as first and second inputs. In one embodiment, the controller 52 may implement a phase-locked loop. The microphone signal 54 and the generated VCO output 63 are compared by a phase detector 58. The phase detector 58 generates a phase-error signal that represents the phase differential between the two signals. The phase error is then filtered by a loop low-pass filter or filter system such as a PID (proportional/integral/derivative) controller. A PID controller adjusts its output based on the size of the phase error (proportional), the way the phase error accumulates over time (integral), and the rate of change of the phase error (derivative). The loop filter 60 controls the dynamics of the phase locking, that is, how quickly and over what range the loop will operate to try to match the phase of the VCO output 63 and the microphone signal 54.
The output of the loop filter 60 provides a VCO input 59 that drives a voltage-controlled oscillator (VCO) 62. The VCO 62 converts the VCO input 59 from the loop filter 60 into a VCO output 63, typically a sine wave, to simplify the analysis of phase by the phase detector 58. This generated VCO output 63 is fed to the first speaker 46 through the automatic gain control circuit 70, thus closing a loop with the microphone 50. This feedback loop adjusts the VCO output 63 to achieve a desired phase difference between the microphone signal 54 and the VCO output 63 associated with a close match to ω0. In this way the natural resonant frequency of the oral cavity 10 and lips 12 is reached, at which point the controller 52 has a stable “lock” on the natural frequency ω0. If the user 42 changes the configuration of the oral cavity 10 and lips 12, ω0 will also change (as described in
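Purely as an illustrative sketch, and not the particular circuit of the disclosure, the loop described above can be modeled in discrete time with a multiplier phase detector, a proportional-integral loop filter, and a numerically controlled oscillator standing in for the phase detector 58, loop filter 60, and VCO 62; the gains and initial frequency below are assumed values:

```python
import numpy as np

def track_resonance(mic_signal, fs, f_init=800.0, kp=2.0, ki=0.02):
    """Discrete-time sketch of the phase-locked loop of controller 52.
    mic_signal: samples of microphone signal 54; fs: sample rate in Hz.
    Returns the per-sample frequency estimate (a stand-in for VCO input 59)."""
    phase = 0.0          # oscillator phase (radians)
    integrator = 0.0     # integral path of the loop filter (cf. loop filter 60)
    freqs = np.empty(len(mic_signal))
    for i, x in enumerate(mic_signal):
        err = x * -np.sin(phase)                  # multiplier phase detector (cf. 58)
        integrator += ki * err
        freq = f_init + kp * err + integrator     # PI correction steers the oscillator
        phase += 2.0 * np.pi * freq / fs          # advance the oscillator (cf. VCO 62)
        freqs[i] = freq
    return freqs
```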
A value of the frequency of operation of the VCO 62, for example, derived from the VCO input 59, may be used to control an automatic gain control circuit 70 controlling the amplitude of the speaker signal 63′ to the speaker 46 according to a lookup table in the automatic gain control circuit 70. The values of the lookup table are selected so as to provide a constant received amplitude at the microphone 50, improving the comparison by the phase detector 58 to the extent that the amplitudes of the VCO output 63 and microphone signal 54 remain more nearly constant.
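A frequency-indexed gain table of the kind described might look like the following sketch; the breakpoints and gain values are hypothetical, chosen only to illustrate the idea of equalizing the amplitude received at the microphone 50 across the operating range:

```python
import numpy as np

# Hypothetical lookup table for automatic gain control circuit 70:
# drive gain applied to speaker signal 63' as a function of the VCO frequency.
FREQ_POINTS_HZ = np.array([300.0, 600.0, 1200.0, 2400.0])   # assumed breakpoints
GAIN_POINTS    = np.array([1.00, 0.80, 0.65, 0.90])          # assumed relative gains

def agc_gain(vco_frequency_hz):
    """Interpolate the speaker drive gain at the current VCO frequency."""
    return float(np.interp(vco_frequency_hz, FREQ_POINTS_HZ, GAIN_POINTS))
```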
The particular lock frequency, again as may be indicated either by the frequency of the VCO output 63 of the VCO 62 or the value of the VCO input 59 received by the VCO 62, is then provided to a note selector/synthesizer 64.
In one embodiment, a note selector/synthesizer 64 is preprogrammed with a selected set of desired frequencies, for instance, the frequencies present in the key of G major. The note selector/synthesizer 64 compares the frequency of the VCO 62 to the frequencies in the selected set and selects the closest frequency to generate a musical note output 61. Thus, the frequency selector can automatically quantize the user-created natural frequency ω0 of the oral cavity 10 and lips 12 to the closest musical note. Generally, this quantization may employ frequencies from any standard scale including equally tempered major and minor scales, 12-tone chromatic scales, as well as reduced-note scales such as pentatonic, hexatonic, heptatonic, and the like, so that the frequency of the VCO 62 is mapped monotonically to the frequency of the musical note output 61. The scales with reduced numbers of notes provide faster and more certain note selection. Stability of note selection may also be aided by introducing hysteresis in the note selection process, providing individual notes with a degree of “stickiness” whereby they resist changing to a subsequent different note unless the VCO frequency moves substantially closer to the next note than to the current note. Importantly, by proper selection of the frequency set, just intonation or equal tempering may alternatively be provided. For certain purposes the quantization may be disabled, allowing smooth glissando between notes. In each of the equal-tempered scales, frequency ratios between notes may be the 12th root of 2 raised to the power of N, where N is an integer. The frequency basis for any scale may be set, for example, with reference to standard concert tuning such as A=440 Hz, but any tuning may be adopted as desired. In one embodiment, the relationship between the VCO output 63 and the musical note output 61 may be according to an interval of a unison or an integer number of octaves (a factor of 2^N, where N is an integer), to provide consonance between the sound heard by the user 42 from the speaker 46 and the note actually being produced, for more intuitive playing.
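A minimal sketch of such a quantizer, assuming a pentatonic scale, A=440 Hz tuning, and an arbitrary hysteresis margin, might look like the following:

```python
import math

A4 = 440.0  # assumed concert-pitch reference

def scale_frequencies(semitone_offsets, octaves=range(-2, 3)):
    """Allowed note frequencies for an equal-tempered scale given as semitone offsets from A."""
    return sorted(A4 * 2.0 ** (octave + s / 12.0) for octave in octaves for s in semitone_offsets)

MAJOR_PENTATONIC = [0, 2, 4, 7, 9]            # one reduced-note scale as an example
ALLOWED = scale_frequencies(MAJOR_PENTATONIC)

def quantize(vco_freq, current_note=None, hysteresis=1.15):
    """Snap the tracked resonance to the nearest allowed note, keeping the current note
    ("stickiness") unless a competing note is closer by the assumed hysteresis margin."""
    nearest = min(ALLOWED, key=lambda f: abs(math.log2(vco_freq / f)))
    if current_note is not None and nearest != current_note:
        if abs(math.log2(vco_freq / current_note)) < hysteresis * abs(math.log2(vco_freq / nearest)):
            return current_note
    return nearest
```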
Whereas the VCO output 63 may preferably be a sine wave to simplify the phase analysis by the phase detector 58, the musical note output 61 may be an arbitrary wave shape selected for aesthetic qualities, including a sine wave, triangle wave, sawtooth wave, pulse wave, or wavetable wave of arbitrary shape. In this respect the note selector/synthesizer 64 may act as a synthesizer receiving a guiding note value and synthesizing a desired note output signal according to any well-known synthesis technique including FM synthesis, procedural synthesis, and the like.
In one embodiment, an envelope modulator 66 receives the output from the note selector/synthesizer 64. The envelope modulator 66 may include an envelope generator 65, which generates an envelope of varying amplitude that approximates the varying amplitude found in a note made by a desired musical instrument or by human vocal cords, that is, by controlling the attack, decay, sustain, and release of the note as is understood in the art. A modulator 67 multiplies the generated envelope with the musical note output 61 received by the envelope modulator 66, thus creating an envelope-modulated signal.
A control button 69 can control this process, for instance, causing a sequencing through attack, decay, sustain, and release to occur only when the control button 69 is actuated. In this respect, the control button 69 provides the timing of the note in the manner that a key on a piano provides the timing of the note. This allows the user 42 to change the note described by the oral cavity 10 without changing the output note from the speaker 68, permitting rapid transitions in the frequency output from the speaker 68 while providing good locking characteristics of the phase-locked loop of the controller 52. Other controls may be provided to allow adjustment of the envelope or to cycle through multiple different envelope shapes.
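A minimal sketch of an attack-decay-sustain-release generator gated by such a button is given below; the segment times and sustain level are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def adsr_envelope(gate, fs, attack=0.01, decay=0.05, sustain=0.7, release=0.2):
    """Sketch of envelope generator 65: an amplitude envelope for a boolean gate
    signal (True while control button 69 is held). Times are in seconds."""
    env = np.zeros(len(gate))
    level, state = 0.0, "idle"
    for i, pressed in enumerate(gate):
        if pressed and state in ("idle", "release"):
            state = "attack"
        elif not pressed and state in ("attack", "decay", "sustain"):
            state = "release"
        if state == "attack":
            level = min(1.0, level + 1.0 / (attack * fs))
            if level >= 1.0:
                state = "decay"
        elif state == "decay":
            level = max(sustain, level - (1.0 - sustain) / (decay * fs))
            if level <= sustain:
                state = "sustain"
        elif state == "release":
            level = max(0.0, level - 1.0 / (release * fs))
            if level <= 0.0:
                state = "idle"
        env[i] = level
    return env

# Modulator 67 then multiplies the note signal by the envelope:
# modulated = note_signal * adsr_envelope(gate, fs)
```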
After processing has been completed to the desired extent (if at all), a second, amplified speaker 68 receives the ultimate output signal and audibly broadcasts it. Alternatively, the VCO input 59 or VCO output 63 may be used as a control voltage or control signal for an external synthesizer, or may be converted by appropriate circuitry to a MIDI signal or to a control voltage for standard synthesizer modules.
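The standard mapping from frequency to a MIDI note number is well known; a sketch of that mapping alone (the conversion circuitry itself is not detailed here) is:

```python
import math

def frequency_to_midi_note(freq_hz, a4=440.0):
    """Nearest MIDI note number for a frequency, via 69 + 12*log2(f/A4)."""
    return int(round(69 + 12 * math.log2(freq_hz / a4)))

# Example: a lock near 523.25 Hz (C5) maps to MIDI note 72.
assert frequency_to_midi_note(523.25) == 72
```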
Referring now to
When multiple control buttons 69a-69c are provided, each of these may be associated with a different note selector/synthesizer 64, or a different voice of a single note selector/synthesizer 64 and a different envelope modulator 66, to produce separate notes, for example, related to the VCO output 63. In one embodiment, one button 69a may produce a note having a fundamental frequency matching the note of the VCO 62 and the other two buttons may produce notes that are proper for a chord embracing the note of button 69a, for example, having intervals of a third and a fifth for a major chord. Alternatively, the buttons 69b, 69c may modify the volume, tone, harmonic content, or the like of the note selector/synthesizer 64, for example, in the manner of a wah-wah pedal.
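By way of a hypothetical sketch of such a button-to-chord-tone mapping (the specific ratios below assume just intonation and are not drawn from the disclosure):

```python
# Hypothetical assignment of the three buttons to chord tones built on the tracked note:
# 69a plays the root, 69b a major third above it, and 69c a perfect fifth above it.
JUST_INTONATION_RATIOS = {"69a": 1.0, "69b": 5.0 / 4.0, "69c": 3.0 / 2.0}

def chord_frequencies(root_hz):
    """Frequencies of a just-intoned major triad built on the tracked root frequency."""
    return {button: root_hz * ratio for button, ratio in JUST_INTONATION_RATIOS.items()}

# chord_frequencies(440.0) -> {'69a': 440.0, '69b': 550.0, '69c': 660.0}
```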
Referring now to
Alternatively and referring to
Referring now to
The mouthpiece 72 may be constructed of multiple parts so that the rim 92 and the plate of the lip abutment surface 90 may be separated from the speaker 46 and microphone 50 to be replaced or washed or the like as desired, for example, being releasably held by releasable attachment elements such as magnets 102 interacting with corresponding magnets 104 on the case 80.
The term “musical output” as described herein is an acoustic output or electrical signal (such as MIDI) intended to control an acoustic output suitable for playing music. Generally, this acoustic output will have a fundamental frequency from 60 to 3000 Hz.
Certain terminology is used herein for purposes of reference only, and thus is not intended to be limiting. For example, terms such as “upper”, “lower”, “above”, and “below” refer to directions in the drawings to which reference is made. Terms such as “front”, “back”, “rear”, “bottom” and “side”, describe the orientation of portions of the component within a consistent but arbitrary frame of reference which is made clear by reference to the text and the associated drawings describing the component under discussion. Such terminology may include the words specifically mentioned above, derivatives thereof, and words of similar import. Similarly, the terms “first”, “second” and other such numerical terms referring to structures do not imply a sequence or order unless clearly indicated by the context.
When introducing elements or features of the present disclosure and the exemplary embodiments, the articles “a”, “an”, “the” and “said” are intended to mean that there are one or more of such elements or features. The terms “comprising”, “including” and “having” are intended to be inclusive and mean that there may be additional elements or features other than those specifically noted. It is further to be understood that the method steps, processes, and operations described herein are not to be construed as necessarily requiring their performance in the particular order discussed or illustrated, unless specifically identified as an order of performance. It is also to be understood that additional or alternative steps may be employed.
References to “a microprocessor” and “a processor” or “the microprocessor” and “the processor,” can be understood to include one or more microprocessors that can communicate in a stand-alone and/or a distributed environment(s), and can thus be configured to communicate via wired or wireless communications with other processors, where such one or more processor can be configured to operate on one or more processor-controlled devices that can be similar or different devices. Furthermore, references to memory, unless otherwise specified, can include one or more processor-readable and accessible memory elements and/or components that can be internal to the processor-controlled device, external to the processor-controlled device, and can be accessed via a wired or wireless network.
It is specifically intended that the present invention not be limited to the embodiments and illustrations contained herein and the claims should be understood to include modified forms of those embodiments including portions of the embodiments and combinations of elements of different embodiments as come within the scope of the following claims. All of the publications described herein, including patents and non-patent publications, are hereby incorporated herein by reference in their entireties.
To aid the Patent Office and any readers of any patent issued on this application in interpreting the claims appended hereto, applicants wish to note that they do not intend any of the appended claims or claim elements to invoke 35 U.S.C. 112(f) unless the words “means for” or “step for” are explicitly used in the particular claim.
This application claims the benefit of U.S. provisional application 62/889,297 filed Aug. 20, 2019, and hereby incorporated by reference.