Musical sounds and events are indicative and reflective of human culture's perception, understanding, and production of sound, language, and meaning. Music is generally performed intentionally, through actions of the body and the manipulation of musical instruments, and is considered a form of artistic expression. Musical instruments have evolved along with technology. Musical compositions, performances, and events may be predetermined to the extent possible by human intention (musical composition) or left to be partially or completely improvised within human-provided structures (Indian ragas, jazz). Other independent sources of musical sound have also long been recognized, whether natural or animal sounds (birds singing, water moving among rocks, wind moving among structures) or environmentally stimulated musical devices produced by human ingenuity (wind chimes, the Aeolian harp).
In some forms of music, acoustical and natural laws provide structure (scales, chords) but in other forms of music (mostly electronic) more general acoustic phenomena and structures (atonality, serialized tones and rhythm, noise spectra, and sound events in an environment) may be recognized as musical.
Music is mainly performed by trained artists, but sometimes the “audience” also participates in a musical event (clapping, cheering, singing along, etc.).
Human artistic determination of music (composition, improvisation) is generally accepted, but random generation and machine or computer determination are also used to alter or create musical events.
The improvement provided by this invention is to incorporate all of the above resources and means in an instrument that can produce musical sound, spanning the range from complete determination by an artist to the expression of natural or environmental sound-determining inputs through a musical structuring device or system, and additionally to make musical events interactive, including participation of an audience.
Additionally, the invention utilizes the above structures and methods to provide musical events responsive to feedback among the instrument/audio input, the instrument's processing structure, and the instrument's output, routed back to the instrument's input or to an acoustic/audio environment of which the instrument's input is a part. The environment may generate the acoustic/audio input to the source-dependent instrument (SDI).
A source-dependent musical instrument receives and processes an audio input signal and produces an audio output signal dependent on analysis of the input signal, a control parameter specification, the internal state of the instrument, signal processing of the input signal, generation and signal processing of a synthesized signal, controlled feedback of the instrument output to the instrument input, and controlled feedback of the output to the environment of the input. The feedback loop can also be separated into feedback within the instrument and feedback that includes the acoustic environment into which the instrument's output is radiated.
The input to the instrument may be intentional, as by manipulation by a musician; or indeterminate, as by monitoring an environmental sound source or an arbitrary input signal; or interactive, such as by monitoring a quasi-indeterminate sound source (a crowd or an audience) and providing acoustic feedback from the instrument into the environment of the sound source (dance hall or auditorium).
The control parameters specify:
one or more formats for analysis of aspects of the input signal;
audio processing of the audio input signal by delay, reverb, phase, distortion, filtering, or modulation;
generation of secondary (synthesized) audio signals, based on the analysis of the input signal and the state of the control parameters, by an oscillator or by other digital or analog methods;
audio signal processing of the secondary signal;
audio processing of combined signals;
feedback combination of the input signal, processed input signal, and processed secondary signal;
feedback from the acoustic environment into which the output signal is transmitted;
feedback of the combined output signal to the input signal; and
feedback of the output signal to the environment of the input signal.
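By way of illustration only, the control-parameter specification enumerated above might be grouped into a single structure; the field names and default values below are invented for this sketch and are not part of the disclosure:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SDIControlParams:
    # Formats for analysis of aspects of the input signal (labels invented here)
    analysis_formats: List[str] = field(default_factory=lambda: ["pitch", "loudness"])
    # Audio processing applied to the input signal
    input_processing: List[str] = field(default_factory=lambda: ["delay", "filter"])
    # Generation and processing of the secondary (synthesized) signal
    secondary_source: str = "oscillator"
    secondary_processing: List[str] = field(default_factory=list)
    # Feedback gains; values below 1.0 keep the loops bounded
    internal_feedback_gain: float = 0.3
    environment_feedback_gain: float = 0.1
```

Such a structure would be set by the control panel or control device and read by the analysis, synthesis, and feedback stages.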
Embodiments of the source-dependent musical instrument may be acoustic (sound-focusing space), acoustic-mechanical (wind chime, Aeolian harp), acoustic-electroacoustic (microphone, amplifier, or speaker feedback), acoustic-electronic (analog SDI), acoustic-digital (digital SDI processing), or electroacoustic (analog input) with all secondary processing performed electronically or digitally.
In a computerized version of the SDI, a visual display indicates the system control parameters and other aspects of operation; input devices are provided (computer or musical keyboard, mouse, trackball, instrument simulator, other), and input may also be a digitally encoded signal (in real time) or a stored digital file.
A control panel or alternative control device is provided. Examples include an analog control panel, keyboard, mouse, and a touch screen.
An example embodiment of the SDI is loosely based on the acoustic “sympathetic strings” implemented on musical instruments such as the sitar. For this embodiment:
The input signal, which may be intentional (as a guitar played into the instrument), indeterminate (a microphone input), or interactive (as crowd input/output), feeds into an analysis means (device and/or algorithm), which extracts frequency and loudness information.
The frequencies of an “input tonality” are specified by structures of single frequencies or groups of multiple frequencies.
When the input signal contains frequency components conforming to the specified input tonality, the analysis system outputs control signals according to, for example, the amplitude of each chosen frequency or tonality detected in the source material, or the onset time of a frequency component. For a tonal system the output tonality is related to the input tonality; for non-tonal systems the output event is related to the input event.
The control signals, for example in a fixed number of channels, provide gating and/or amplitude envelope generation for allowing the synthesized signals from a corresponding number of oscillators or tone-synthesizing channels to be passed through to the output sections of the device.
An input event is the recognition of a frequency, group of frequencies, or other defined audio pattern by an “input filter” which may be a bandpass filter, comb filter, or other single or multiple filters. The configuration of all possible input events through the input filters of an SDI is the input tonality.
The tones of the secondary synthesizers, collectively termed the “output tonality,” may be generated at frequencies equal to those set for the input tonality. However, they may also be set to generate tones of different frequencies, specified by a frequency ratio or any other method (e.g., filtering or delay), or changed in a predetermined or partially determined sequence, or at random.
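A simplified software sketch of this "sympathetic strings" behavior follows; it is one assumed realization, not the disclosed implementation. The Goertzel single-bin detector stands in for the narrow-band input filter, and the threshold and channel parameters are illustrative:

```python
import math

def goertzel_power(samples, freq, sr):
    # Single-bin DFT power at `freq`: a stand-in for the narrow-band input filter.
    w = 2.0 * math.pi * freq / sr
    coeff = 2.0 * math.cos(w)
    s1 = s2 = 0.0
    for x in samples:
        s1, s2 = x + coeff * s1 - s2, s1
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

def sympathetic_channel(samples, sr, f_in, ratio=1.0, threshold=100.0, n_out=64):
    # If energy is detected at f_in, the gate opens and a synthesized tone
    # at f_in * ratio (the related "output tonality") is emitted.
    if goertzel_power(samples, f_in, sr) < threshold:
        return [0.0] * n_out  # gate closed: no output event
    f_out = f_in * ratio
    return [math.sin(2.0 * math.pi * f_out * n / sr) for n in range(n_out)]
```

A bank of such channels, one per voice, corresponds to the fixed number of gated oscillator channels described above.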
Settings are provided for the input and output frequencies of the tone generators, by which the user can adjust their sonic character. Controls are also provided for the attack-decay volume envelope, as well as mixer settings for rectangular, triangular, and sine waveforms.
Likewise, the signal processing of the tones generated by the secondary synthesizers may be specified according to a fixed structure, based on analysis of the input signal, or varied in sequence, at random, or according to a computational algorithm (VC filter, VCA, FFT, etc.).
An output configuration/tonality is the configuration of all possible output events created by output synthesizers.
The input signal itself may be processed by the signal processing system, either separately from the secondary synthesized signals or mixed with them, according to fixed parameters, parameters derived from the signal analysis section, etc. (e.g., by filtering, compression, distortion, delay, reverberation, etc.).
The processed signals may also be fed back to the input stage according to the methods mentioned, and are fed to an output stage (e.g., merely attenuated, or with additional processing).
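One possible realization of the controlled feedback path is a delay line whose loop gain is held below unity so the loop remains bounded; this is an assumption offered for illustration, not the disclosed implementation:

```python
def feedback_delay(x, delay, g):
    # Delay-line feedback: each output sample is the input plus g times the
    # output from `delay` samples earlier; |g| < 1 keeps the loop stable.
    buf = [0.0] * delay
    out = []
    for s in x:
        y = s + g * buf[0]
        buf = buf[1:] + [y]  # shift the delay line (list copy: a sketch, not realtime code)
        out.append(y)
    return out
```

An impulse fed into this loop re-emerges at multiples of the delay, attenuated by g on each pass.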
A harmony compensating SDI could correct vertical relationships at each moment in a musical event, for a given tuning.
The output signal (the signal produced by the output stage) may be amplified and converted to an acoustical signal. The acoustical or digital output may be directed to an audience (radio, internet, or “live”), to a performer, and/or both according to the methods described.
The instrument may consist of a single audio or acoustical path, or multiple paths (stereo, quad, etc.) with channel replication (1, 4, 10 channels, etc.) in the software.
The inputs, processing and outputs of the device may be in a single location or multiple locations connected by available communication channels (wire, wireless, internet, satellite, etc.). The channels may be single or multiple through identical SDI processes, or through different processes for each channel.
Likewise the operations determining the processing parameters may be provided by a single human operator or by multiple operators or automata, connected by available communications channels. A single-channel example could be one player with a guitar, SDI, and amp-speaker; multiple-channel operation could be a group of players, with the audience experiencing the produced musical events over the Internet.
In this way any musical event generated or processed by the SDI may be interactive to any degree specified.
Any of the events, signals or control parameters may be recorded using any available recording medium and technology, and any recording may be used in further signal generation and processing. The recorded material may be played back later as a musical performance, or may be used as part of material for a new or ongoing musical event.
Instead of, or in addition to, analyzing the input signal according to a “tonality” or frequency series, alternative input “event” specifications may be utilized. For example, the analysis system may be specified to recognize symbols, spoken or sung words (speech recognition), or other sound sequences that are better specified by noise spectra (drums, cymbals, etc.).
Likewise the secondary signal generators may generate “events” such as words, noise spectra, etc (i.e. each oscillator could be a speech synthesizer).
Secondary signal generators may also be extended to include non-audio signals or events, such as visual signals, mechanical motions, and other effects that can be produced by activation by electronic or digital signals, such as lights, vibrations, or theatrical effects.
Input sources may also be encoded to include non-audio signals or events, transduced by appropriate transducers and analyzed by input event “filters” to generate secondary output events.
Input signals can be extended to include optical signals, electromechanical signals, temperature, pressure, humidity; in general any event or stimulus that can be converted into electronic or digital signals by a transducer. This is important for tuning the input signal to the output signal, or vice versa.
The input signals can be created intentionally by one or more users, by environmental sources, by predetermined programs, or by combinations of these (instrument players, outside traffic, etc.). This could include, for example, hummingbird songs translated down from ultrasonic frequencies and used for SDI manipulation.
The analysis system parameters can be intentionally manipulated by one or more users, or by environmental sources, by predetermined programs, or combinations of these to create, for example, a multi-player instrument/event (like a game).
The same applies to the signal synthesis or event generators, and the same applies to the control of signal processing and feedback systems.
The SDI could also be made into an inexpensive, integrated circuit chip for toys (finding the music in sounds) or possibly other musical applications.
A possible use of the SDI includes feedback to and from, for example, a club environment. In this case the audience provides audio input and the operator of the instrument can “play” the SDI to correspond interactively to reactions from the audience.
Another use might be to input environmental sounds in a residence near a highway, and output sounds that create a more harmonious acoustical environment when mixed with the incoming sounds.
In summary, the operation of both the input and the control of the instrument may vary in a range including determinate, intentional, improvised, and randomly determined.
Using internet and other long-distance communication systems, the inputs, controls and outputs may be distributed at single and/or multiple locations reasonably simultaneously. Thus, large-scale musical and/or artistic events may be performed and attended by arbitrarily large and diversely located group(s) of participants, or the “audience.”
The following appended material describes other specific embodiments, details of implementation, uses and additional inventive features. All inventive features specified are believed to be enabled by currently available technology accessible to those skilled in the related fields of practice.
Although various embodiments and alternatives have been described in detail for purposes of illustration, various further modifications may be made without departing from the scope and spirit of the invention. Accordingly, no limitation on the invention is intended by way of the foregoing description and drawings, except as set forth in the claims to be appended to the non-provisional version of this disclosure.
The envelope generator 308 outputs a control signal 309 (analog, digital, etc., as appropriate to the embodiment) to the control input 311 of a voltage-controlled (or digitally controlled, etc.) attenuator or amplifier 310. An oscillator 312 (which may be analog, digital, algorithmic, etc.), having a frequency determined by a control input 3112, generates an audio frequency, which may optionally be the same frequency as the center frequency of the narrow-band filter 304. The oscillator output 3113 is coupled to the audio signal input 3111 of the voltage-controlled attenuator. The resulting output 314 of the voltage-controlled attenuator is an independently generated signal having a frequency corresponding to the input “center” frequency, and having its own envelope characteristics. When an input having the same frequency component is detected, the corresponding output signal is generated. Multiple channels of this system are contemplated, as well as alternative emulation models. Control systems are denoted by 320, 322, and 324.
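For illustration, a linear attack-decay envelope of the kind the envelope generator might produce, applied as the gain of the controlled attenuator, can be sketched as follows; the shapes and sample counts are assumptions, not the disclosed circuit:

```python
def ad_envelope(n, attack, decay):
    # Linear attack-decay amplitude envelope over n samples (illustrative shape).
    env = []
    for i in range(n):
        if i < attack:
            env.append(i / attack)
        else:
            env.append(max(0.0, 1.0 - (i - attack) / decay))
    return env

def vca(audio, env):
    # Voltage-controlled-amplifier analogue: scale the audio by the control envelope.
    return [a * e for a, e in zip(audio, env)]
```

Applying `vca` to an oscillator's output gates the synthesized tone with its own envelope characteristics, as described above.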
The envelope generator outputs are fed to the control inputs of voltage-controlled (for example) amplifiers 412. Each VCA is fed by the audio signal output of a corresponding oscillator 418, each oscillator having at least a determined oscillation frequency. The VCA audio signal outputs are summed by the output stage 416 and transduced into an output signal Vo (a voltage, digital sequence, etc.) and out to the environment. The control elements 406 provide for setting all the control parameters by a user interface (not shown), which may consist of analog or digital input control devices (knob, button, touch screen, digital input, MIDI input, etc.).
Icons and displays on the panel can be manipulated by computer keyboard, musical instrument (MIDI interface), mouse, touch screen, etc. Icon 602 controls the amplitude of an input audio signal. Icon 604 displays an activity level for each of 8 channels. Icon 608 provides a reference tuning signal for the user/operator, which can be preset numerically or by pitch analysis of an audio input signal. Icon 610 allows selection of the MIDI input and output interface for the SDI. Icon 612 allows control of the computer sound card settings. Icon set 613 provides control and display of the gate threshold level and of the envelope attack and decay times set by the operator. Icon 614 allows setting of the wave symmetry of the triangle and pulse waveforms of the oscillators for all channels. Icon 615 provides for mixing the output levels of the sine, triangle, and pulse waveforms of the oscillators into the output path.
Element 616 is an input/output frequency scaler, to allow a pre-computed frequency offset or ratio between input frequencies detected and output frequencies synthesized.
Element 618 is a harmonic series generator to enable the preset of input filters and/or output oscillators according to an integer harmonic series for each input frequency chosen.
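The behavior of elements 616 and 618 can be illustrated with two small helpers; these are illustrative sketches only:

```python
def scale_frequency(f_in, ratio):
    # Input/output frequency scaler (element 616): map a detected input
    # frequency to an output frequency by a preset ratio or offset.
    return f_in * ratio

def harmonic_series(f0, n):
    # Harmonic series generator (element 618): integer multiples of f0 for
    # presetting input filters and/or output oscillators.
    return [f0 * k for k in range(1, n + 1)]
```

For example, a ratio of 1.5 maps a detected 440 Hz input to a 660 Hz output (a fifth above), and the series for 110 Hz presets filters or oscillators at 110, 220, 330, 440 Hz.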
Element 619 is a device for the setting, saving, and recall of “preset” settings for SDI parameters such as input/output tonalities, etc. In addition to creating the presets, the user can choose to automatically step between sequences of presets, creating the equivalent of “chord progressions” of preset input/output tonalities and other parameters.
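The stepping between saved presets described for element 619 might be sketched as follows; the preset contents here are hypothetical examples, not values from the disclosure:

```python
import itertools

# Hypothetical presets pairing input and output tonalities (frequencies in Hz).
presets = [
    {"input": [261.6, 329.6, 392.0], "output": [261.6, 329.6, 392.0]},  # C major
    {"input": [349.2, 440.0, 523.3], "output": [349.2, 440.0, 523.3]},  # F major
    {"input": [392.0, 493.9, 587.3], "output": [392.0, 493.9, 587.3]},  # G major
]

# Stepping through the saved presets yields the equivalent of a "chord
# progression" of input/output tonalities; cycle() wraps back to the start.
progression = itertools.cycle(presets)
```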
Element 620 is an icon for the recall and sequencing functions.
Element 622 is a device for the detection of a “pitch” in the input audio signal, and the insertion of the detected pitch as an input and/or output tonality for each of the channels, or “voices.”
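The pitch detection of element 622 could be realized in many ways; a naive time-domain autocorrelation is sketched below as one assumed method (the disclosure does not specify an algorithm):

```python
def detect_pitch(samples, sr, fmin=80, fmax=1000):
    # Naive autocorrelation: pick the lag (within the fmin..fmax search range)
    # whose correlation with the undelayed signal is largest; the detected
    # pitch is the sample rate divided by that lag.
    best_lag, best_corr = 0, 0.0
    for lag in range(sr // fmax, sr // fmin + 1):
        c = sum(samples[i] * samples[i + lag] for i in range(len(samples) - lag))
        if c > best_corr:
            best_corr, best_lag = c, lag
    return sr / best_lag if best_lag else 0.0
```

The detected pitch could then be inserted as an input and/or output tonality for a chosen voice.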
Element 628 is an extended musical staff for the insertion and visualization of the cumulative input and output tonalities.
Element 626 is a device for providing a numerical input of the frequencies for all voices of the input and output tonalities.
Display 624 shows a trace 630 indicating the FFT of the input waveform, denoted the input spectrum.
Display 632 displays the output spectrum.
Display 605 displays the input tonality of each channel, alongside the spectrum display. Each line indicates the frequency of the input for the corresponding voice. In addition, a display line will also appear whenever an input is detected corresponding to the frequency of any voice channel.
Numerous other features of the embodiment shown will be available in the user manual for the associated device release.
This embodiment was created and runs on a personal computer with either WINDOWS or MAC operating systems and an audio input/output card.
The software realization was created in the “MAX/MSP” language, but could equally well be created using other programming languages, tools, operating systems, or computers.
Also, the SDI filter may select non-harmonic combinations of frequencies to further specify particular musical events such as multiple tones, etc. Example: the overtone series of frequency f1 = f1, 2f1, 3f1, . . . , together with the overtone series of frequency f2 = f2, 2f2, 3f2, 4f2, . . . .
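A sketch of selecting such a combined, possibly non-harmonic set of frequencies (illustrative only):

```python
def combined_overtone_series(fundamentals, n):
    # Union of the first n overtones of each fundamental; the combined set
    # may be non-harmonic overall even though each series is harmonic.
    freqs = set()
    for f in fundamentals:
        freqs.update(f * k for k in range(1, n + 1))
    return sorted(freqs)
```

For two fundamentals of 100 Hz and 150 Hz with three overtones each, the combined set is 100, 150, 200, 300, 450 Hz (shared overtones appearing once).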
This application claims priority from U.S. Provisional Patent Application Ser. No. 60/835,875 filed Aug. 7, 2006, and which is incorporated by reference herein.
Number | Name | Date | Kind
---|---|---|---
4771671 | Hoff, Jr. | Sep 1988 | A
5557424 | Panizza | Sep 1996 | A
5565641 | Gruenbaum | Oct 1996 | A
6054646 | Pal et al. | Apr 2000 | A
6057498 | Barney | May 2000 | A
Number | Date | Country
---|---|---
60835875 | Aug 2006 | US