This disclosure relates to applying special audio effects to sounds produced, for example, by musical instruments and, more particularly, to controlling the application of such audio effects.
As a musician or performer plays an instrument during a concert or other type of performance, a song may call for, or it may otherwise be desirable, to apply one or more special audio effects to musical notes produced by the instrument. To apply the effect, audio signals from the instrument are sensed (e.g., with a microphone, pickup, etc.) and sent to a signal processor that may be dedicated to applying such effects to the audio signals. After the one or more audio effects are applied by the signal processor, the processed audio signals are usually conditioned (e.g., amplified, filtered, etc.) and provided to speakers or another type of output device. To initiate the application of the audio effects, the person playing the instrument typically steps on a foot-pedal located on stage near the person. However, to trigger the application of the audio effects on stage, the musician must first locate the foot-pedal and then step on it in a manner so as not to look awkward or out of step with the song being played.
In accordance with an aspect of the disclosure, an audio effects control is configured to include a sensor that senses movement, for example, a change in position, orientation, acceleration or velocity of the sensor. For example, by mounting the sensor to a musical instrument, the movement may be the sensed movement associated with playing the musical instrument. Alternatively, by securing the sensor to the person playing the instrument, the sensor will sense movement of the part of the person to which the sensor is secured. The sensor produces an electrical signal in response to detecting the movement, or change in position or orientation, and the electrical signal is sent to an audio effects unit to control application of one or more audio effects on audio signals produced by the musical instrument. The sensor can also be secured to any other item whose movement, position or orientation can be initiated and/or controlled.
The sensor may be configured to sense any one or several phenomena. For example, the sensor may be configured to sense acceleration of the musical instrument (with the aid, for example, of an accelerometer), velocity, or alternatively a position change of the musical instrument (with the aid, for example, of a gyroscope). The position change sensed by the sensor may include any movement, or a prescribed movement such as the musical instrument or a portion of the instrument rotating about an axis or translating along an axis.
Various types of electrical signals may be produced by the sensor. For example, the electrical signal may be an analog signal and may be modulated for transmission from the sensor. An electrical circuit may also be provided for conditioning the electrical signal. The audio effects control also includes an audio effects unit which is responsive to the signal generated by the sensor. The electrical circuit may convert the electrical signal into a digital signal prior to transmission to the audio effects unit. The electrical circuit may also convert the electrical signal into a musical instrument digital interface (MIDI) signal.
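The conversion of the sensor's electrical signal into a MIDI message can be sketched as follows. This is an illustrative Python sketch only: the voltage range, the choice of controller number (CC#1, the modulation wheel), and the function name are assumptions, as the disclosure does not specify how the MIDI conversion is performed.

```python
def sensor_to_midi_cc(sensor_value, v_min=0.0, v_max=3.3, controller=1, channel=0):
    """Quantize an analog sensor voltage into a 3-byte MIDI Control Change message.

    Hypothetical helper: the 0-3.3 V range and CC#1 (modulation wheel)
    are illustrative choices, not values from the disclosure.
    """
    # Clamp the voltage, then scale it to the 7-bit MIDI data range (0-127).
    clamped = max(v_min, min(v_max, sensor_value))
    data = round((clamped - v_min) / (v_max - v_min) * 127)
    # 0xB0 is the Control Change status byte; the low nibble carries the channel.
    status = 0xB0 | (channel & 0x0F)
    return bytes([status, controller & 0x7F, data])
```

The resulting three bytes could then be transmitted to any MIDI-capable audio effects unit in place of (or alongside) an analog control voltage.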
In various embodiments, sensing movement may include sensing acceleration of a portion of the musical instrument, sensing acceleration of a portion of a person playing the musical instrument, sensing a rotation of a portion of the musical instrument and/or sensing a rotation of a portion of a person playing the musical instrument, or sensing a translation of a portion of the musical instrument and/or sensing a translation of a portion of a person playing the musical instrument.
Additional advantages and aspects of the present disclosure will become readily apparent to those skilled in the art from the following detailed description, wherein embodiments of the present invention are shown and described, simply by way of illustration of the best mode contemplated for practicing the present invention. As will be described, the present disclosure is capable of other and different embodiments, and its several details are susceptible of modification in various obvious respects, all without departing from the spirit of the present disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as limitative.
Referring to
When playing the instrument, a musician may intentionally move guitar 12 in a particular manner such that sensor 10 senses the movement and sends a control signal over cable 14 to audio effects unit 16. Upon receiving the control signal, one or more predefined special audio effects are applied in a controlled manner to the audio signals that are provided over cable 18 from guitar 12. The control signal from sensor 10 may provide various types of control to the application of the audio effects. For example, the control signal may initiate the application of one or more audio effects. By providing this trigger from the control signal, the musician is free to apply an effect from any location rather than e.g., having to seek out and step on a foot-pedal. Other types of audio effect control may be provided by the control signal. For example, rather than providing a discrete trigger signal to initiate (or halt) application of one or more effects, a variable control signal (analog or digital) may be produced by sensor 10. The variable signal may be used to dynamically control various aspects of the audio effects. For example, the variable control signal may be used to adjust the resonant frequency of an audio effect or other similar parameter.
In this illustrative example, after the audio effects are applied, the audio signals are sent over a cable 20 to an amplifier/speaker 22 that broadcasts the signals. As suggested, to halt the application of the audio effects, in some arrangements the musician may intentionally move guitar 12 in another manner such that the movement is detected by the sensor 10. Based on the detected movement, another trigger signal is sent over cable 14 to audio effects unit 16. Upon receiving this second trigger signal, application of the audio effects may be halted or different audio effects may be applied. Alternatively, the audio effects may last a predetermined time period before ending. In another arrangement the audio effects may continue until a cue is provided from the music, e.g., there is a pause or halt in the music, or a particular note is played. In addition, one or more of the audio effects applied to the music can be applied in a fade in and/or fade out fashion.
Referring to
As illustrated in
Referring to
Along with detecting the rotation of guitar 12, other movements may be sensed and initiate generation of an electrical signal by sensor 10. For example, sensor 10 may include a gyroscope or other device for sensing the orientation of the sensor, or the sensor 10 may be capable of sensing translation of the guitar. By incorporating a global positioning system (GPS) receiver in sensor 10, for example, a signal may be produced as the position of the guitar changes as the musician moves. A laser system may also be incorporated into sensor 10 to sense position changes of the guitar relative to one or more reflective surfaces (e.g., a polished floor, wall, ceiling, etc.).
By sensing these rotational, orientation and/or translational changes, the signals produced by sensor 10 may be used by audio effects unit 16 to control the application of one or more audio effects to the musical tones produced by guitar 12. For example, the performer may intentionally move the guitar to apply an audio effect known as a “wah-wah” effect. This type of effect is generated by sweeping the resonant frequency of a filter (which may be included in audio effects unit 16). As guitar 12 changes position, the corresponding signals produced by sensor 10 control the application of the audio effect. For example, guitar 12 may initially be oriented downward (in the “−y” direction) along axis 34, and the signal produced by sensor 10 controls the application of the audio effect at the low resonant frequency (e.g., 200 Hz) of the filter. As guitar 12 is rotated toward an upward vertical position (oriented in the “+y” direction) along axis 34, the signals produced by sensor 10 control the application of the audio effect across the frequency spectrum of the filter to an upper resonant frequency (e.g., 4000 Hz). This “wah-wah” effect (or another effect) may also be applied as guitar 12 is rotated about any of the axes (e.g., axis 32, 34, or 36) shown in the figure. Also, sensor 10 may control the application of this effect as guitar 12 is translated (e.g., carried by the performer across a stage), as the orientation of the guitar is changed, or as the guitar is otherwise moved so that the sensor responds.
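The orientation-to-frequency mapping described above can be expressed as a short Python sketch. The disclosure gives only the endpoint frequencies (200 Hz and 4000 Hz), so the tilt-angle range and the exponential sweep shape (chosen so equal rotations sound like equal musical intervals) are assumptions for illustration.

```python
import math

def wah_center_frequency(angle_deg, f_low=200.0, f_high=4000.0):
    """Map the guitar's tilt angle to a wah filter resonant frequency.

    angle_deg: -90 (neck pointing down, the "-y" direction) through
    +90 (neck pointing up, the "+y" direction). The angle range and
    exponential sweep are illustrative assumptions.
    """
    # Normalize the clamped angle into 0..1 over the -90..+90 degree range.
    t = (max(-90.0, min(90.0, angle_deg)) + 90.0) / 180.0
    # Exponential interpolation from f_low (pointed down) to f_high (pointed up).
    return f_low * math.exp(t * math.log(f_high / f_low))
```

An audio effects unit would then retune its resonant filter to the returned frequency each time a new orientation sample arrives from the sensor.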
Along with or in lieu of attaching sensor 10 to the instrument (e.g., guitar 12), one or more sensors may also be attached to the performer playing the instrument. An example is shown in
While sensor 10 is attached to the performer in the illustrated
By attaching sensor 10 to the performer, movement may be better controlled. For example, the performer may trigger a “wah-wah” audio effect by pointing his or her hand toward the ground (along the “−y” direction of axis 34) to apply the audio effect at the low resonant frequency (e.g., 200 Hz) of a filter. Then, the performer may rotate his or her arm about axis 32 and point the hand toward the ceiling (along the “+y” direction of axis 34). While making this motion, signals produced by sensor 10 may control the application of the audio effect across the frequency spectrum of the filter to the upper resonant frequency (e.g., 4000 Hz). Other types of audio effects may also be controlled based on the motion of the musician's hand.
In the illustrated example of
While this example described attaching sensor 10 to the musician's hand, in other arrangements, the sensor may be attached elsewhere to the musician. For example, sensor 10 may be incorporated into an arm-band or attached to a piece of the musician's clothing or costume. Additionally, multiple sensors may be attached to the musician for producing multiple signals that may be used to control the application of one or more audio effects by audio effects unit 16. By incorporating one or more of these sensors onto the performer or onto the instrument played by the performer, musical performances are improved since the performer is free to move anywhere on stage and trigger the application of audio effects.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, prescribed movements of the sensor are described as producing the control signal for producing the audio effect. It is also possible to have multiple sensors for producing different audio effects. A system can also be provided wherein different prescribed movements of a sensor produce different audio effects. Further, while audio effects unit 16 is shown as a standalone unit, it may be connected to a computerized system, or alternatively be embodied as a software program run entirely on a computerized system. As such, the signals generated by the sensor or sensors would be received and processed by the computerized system to generate signals that drive one or more loudspeakers, such as speaker 22 in the illustrated embodiment shown in
According to one operational mode as shown, audio effects controller 520 applies currently selected audio effects function 560 to the received signal 505 for amplification by audio amplifier 585 and playback on speaker 590 as an audible signal 592. As described herein, the application of audio effects (e.g., associated with currently selected audio effects function 560) to received audio signal 505 depends on motion associated with motion sensor device 510. For example, as the motion sensor device 510 produces a spectrum of motion signals 511, the audio effects controller 520 applies different audio effects to the audio signal as specified by the motion signal 511 and the currently selected audio effects function 560. Accordingly, a musician wearing the motion sensor device 510 on his hand and playing a corresponding guitar (e.g., audio source 502) that produces the audio signal 505 can apply audio effects to the audio signal produced by the guitar merely by movements associated with the motion sensor device 510.
Note that a current operational mode associated with audio effects controller 520 can be changed based on motion associated with the motion sensor device 510. For example, the audio effects controller 520 can support multiple different types of audio effects functions 525 (e.g., audio effects function 525-1, audio effects function 525-2, audio effects function 525-M) for selective application to the received audio signal 505. Motion monitor function 540 can provide continuous monitoring of motion signal 511 produced by motion sensor device 510. When motion monitor function 540 detects a predetermined type of input associated with the motion sensor device 510, the motion monitor function 540 can initiate a signal to audio effects function selector 530 to select a different type of audio effects function 525 for application to the received audio signal 505. As an example, in one embodiment, a user wearing motion sensor device 510 can knock (one or more times) on a substantially stationary object (e.g., side of a guitar, table, floor etc.) such that motion signal 511 indicates a sudden deceleration associated with the motion sensor device 510. In response to receiving such input (e.g., a motion signal 511 having a magnitude outside of a predetermined operating range or above a threshold value), the motion monitor function 540 provides a command to audio effects function selector 530 to select and download a corresponding audio effects function 525 from repository 580 to currently selected audio effects function 560 for application to the received audio signal 505.
In one embodiment, the number of knocks detected by the motion monitor function 540 indicates which of the audio effects function 525 to currently apply to the received audio signal 505. For example, the motion monitor function 540 can be configured to prompt application of audio effects function 525-1 in response to detecting two knocks, prompt application of audio effects function 525-2 in response to detecting three knocks, and so on.
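The knock-counting scheme can be sketched as follows. The acceleration threshold, refractory period, and sample format are illustrative assumptions; the disclosure specifies only that a sudden deceleration above a threshold counts as a knock and that the number of knocks selects among the audio effects functions 525.

```python
def count_knocks(accel_samples, threshold=8.0, refractory=5):
    """Count sudden-deceleration 'knocks' in a stream of acceleration samples.

    A knock is any sample whose magnitude exceeds `threshold`; the
    `refractory` count suppresses re-triggering on the ringing that
    follows a single impact. Units and values are illustrative.
    """
    knocks, cooldown = 0, 0
    for a in accel_samples:
        if cooldown > 0:
            cooldown -= 1          # still inside the previous knock's ring-down
        elif abs(a) > threshold:
            knocks += 1
            cooldown = refractory  # ignore the next few samples
    return knocks

def select_effect(knocks, effects):
    """Two knocks select effects[0], three select effects[1], and so on."""
    index = knocks - 2
    if 0 <= index < len(effects):
        return effects[index]
    return None  # ignore single knocks or counts past the last effect
```

In use, the motion monitor function would feed a short buffer of recent acceleration samples to `count_knocks` and hand the result to `select_effect` to pick the function to download from the repository.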
Note that the audio effects controller 520 can also toggle between a first mode (e.g., which applies audio effects to the received audio signal 505) and a second mode (e.g., which prevents application of audio effects to the received audio signal 505) based on detection of motion signal 511 above a threshold value. Accordingly, a user can knock on an object to terminate application of an audio effects function to the received audio signal and knock again to turn on application of the audio effects function to the received audio signal 505. Embodiments herein therefore include detecting a change in motion associated with the motion sensor device 510; comparing the change in motion to a threshold value; and, in response to detecting that the change in motion for a given time interval (e.g., sampling of motion signal 511 for a time duration) is greater than the threshold value, discontinuing application of the currently selected audio effects function to the received audio signal 505. Thus, embodiments herein include detecting that a user wearing the motion sensor device 510 knocks on an object to disable application of the audio effect to the received audio signal 505, as well as detecting that a user wearing the motion sensor device 510 knocks on an object to enable application of the audio effect to the received audio signal 505.
In other words, embodiments herein support detecting that a user wearing the motion sensor device 510 knocks on an object (e.g., the side of a guitar) to switch between a first mode of applying the audio effect (e.g., audio effects function) to the received audio signal and a second mode of terminating application of the audio effect (e.g., audio effects function) to the received audio signal 505.
Accordingly, motion sensor device 510 can provide dual control functionality. For example, one control function (e.g., when the motion sensor device 510 produces a voltage outside of a range or above a threshold value) indicates which of multiple audio effects modes to apply to audio signal 505. Another control function (e.g., when the motion sensor device 510 produces a voltage within a predefined range) indicates which of a corresponding spectrum of audio effects of a currently selected audio effects function 560 to apply to the received audio signal 505.
Application of different audio effects to the received audio signal 505 can include application of such functions as amplification, attenuation, distortion, reverberation, time delaying, and up mixing or down mixing of the received audio signal into other frequency bands to modify the received audio signal 505 for playback on speaker 590.
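Two of the listed effect functions, distortion and time delay, can be sketched in a few lines of Python. The drive amount, clipping limit, delay length, and mix level below are illustrative parameters, not values from the disclosure.

```python
def distort(sample, drive=2.0, limit=0.8):
    """Hard-clipping distortion: boost the sample, then clamp it.

    Illustrative parameters; real effects units typically use softer
    clipping curves.
    """
    boosted = sample * drive
    return max(-limit, min(limit, boosted))

def delay(samples, delay_samples=3, mix=0.5):
    """Mix each sample with a delayed copy of the signal (a simple echo)."""
    out = []
    for i, s in enumerate(samples):
        # Before enough samples have elapsed, the echo is silence.
        echo = samples[i - delay_samples] if i >= delay_samples else 0.0
        out.append(s + mix * echo)
    return out
```

A motion signal within the operating range could modulate `drive` or `mix` continuously, in the same way the wah sweep is modulated by orientation.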
Note further that the motion sensor device 510 can produce a signal for each of multiple axes of motion. In such an embodiment, the motion monitor function 540 can initiate selection of a new mode when either or both of the monitored axes produce a sudden deceleration above a threshold value based on corresponding movement associated with motion sensor device 510. Requiring detection of multiple “knocks” to change an audio effects mode associated with audio effects controller 520 can help prevent inadvertent mode changes when a respective user wearing the motion sensor device 510 accidentally bumps his hand (or other appendage, as the case may be) into a stationary object and produces a false “change mode” signal.
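A multi-axis variant of the knock detector, including the multiple-knock requirement that guards against accidental bumps, might look like the following sketch. The thresholds, refractory period, and window length are assumptions for illustration.

```python
def mode_change_requested(samples_xyz, threshold=8.0, required_knocks=2, window=50):
    """Decide whether a mode change was deliberately requested.

    samples_xyz: sequence of (ax, ay, az) acceleration tuples. A knock
    fires when ANY axis exceeds `threshold`; at least `required_knocks`
    knocks must land within `window` samples, so a single accidental
    bump is ignored. All parameter values are illustrative.
    """
    knock_times = []
    cooldown = 0
    for t, axes in enumerate(samples_xyz):
        if cooldown:
            cooldown -= 1          # suppress ringing after an impact
            continue
        if any(abs(a) > threshold for a in axes):
            knock_times.append(t)
            cooldown = 5
    # Look for required_knocks knocks falling within the time window.
    for i in range(len(knock_times) - required_knocks + 1):
        if knock_times[i + required_knocks - 1] - knock_times[i] <= window:
            return True
    return False
```

The motion monitor function would call this over a rolling buffer and forward a “change mode” command to the audio effects function selector only when it returns true.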
In step 610, the audio effects controller 520 receives an audio signal 505 from audio source 502 (e.g., a musical instrument such as a guitar, an audio playback device such as an MP3 player, etc.).
In step 620, the audio effects controller 520 monitors a motion parameter (e.g., acceleration, change in acceleration, velocity, etc.) associated with motion sensor device 510. As previously discussed, in one embodiment, a user wears the motion sensor device 510 while playing a musical instrument such as a guitar.
In step 630, for detected motion of the motion sensor device 510 within a predefined range, the audio effects controller 520 applies a currently selected audio effects function 560 to the received audio signal 505 depending on a magnitude of the detected motion (e.g., the monitored motion parameter) monitored by motion monitor function 540.
In step 640, the audio effects controller 520 detects occurrence of a change in movement (e.g., a monitored motion parameter such as acceleration) of the motion sensor device 510 outside of the range, or detects that the monitored motion parameter exceeds a threshold value. For example, in one embodiment as previously discussed, the motion monitor function 540 detects a sudden deceleration of motion associated with the motion sensor device 510 (e.g., strapped to a user's hand) as a result of the user repeatedly knocking on a relatively stationary object such as a guitar. Occurrence of one or more knocks by the user can indicate to switch which of multiple audio effects functions 525 to apply to the received audio signal 505.
In one embodiment, occurrence of two knocks by the user can indicate to toggle between a first mode in which the audio effects controller 520 applies an audio effects function to the received audio signal and a second mode in which the audio effects controller 520 does not apply any audio effects functions to the received audio signal 505. Thus, a user can select the first mode (e.g., an ON mode) for modifying (e.g., distorting) the received audio signal 505 (according to a selected audio effects function) for playing on speaker 590. The user can select the second mode (e.g., an OFF mode) for merely playing the received audio signal 505 on speaker 590 without any modification (e.g., without any distortion or application of an audio effects function).
In step 650, in response to detecting the change in movement (e.g., a sudden deceleration as a result of knocking on an object) of the motion sensor device 510 outside of the range, or that the motion sensor device 510 experiences a change in deceleration above a threshold value, the audio effects controller 520 discontinues application of the audio effects to the received audio signal 505.
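The flow of steps 610 through 650 can be condensed into a per-sample sketch. The motion range, knock threshold, and the simple motion-scaled gain standing in for a “currently selected audio effects function” are all assumptions; the sketch only illustrates the control flow (apply an effect within the range, toggle on a knock outside it).

```python
def process_sample(audio_sample, motion_value, state,
                   low=-5.0, high=5.0, knock_threshold=8.0):
    """One iteration of the flow in steps 610-650 (illustrative sketch).

    state: mutable dict holding the current on/off mode, e.g. {"enabled": True}.
    Within [low, high] the motion value scales a simple gain effect; a
    value past knock_threshold (a 'knock') toggles effects off or on.
    """
    if abs(motion_value) > knock_threshold:
        state["enabled"] = not state["enabled"]      # steps 640/650: toggle on a knock
        return audio_sample                          # pass the knock's sample through dry
    if state["enabled"] and low <= motion_value <= high:
        depth = (motion_value - low) / (high - low)  # step 630: magnitude-dependent
        return audio_sample * (1.0 + depth)          # apply a motion-scaled gain
    return audio_sample                              # dry signal when disabled
```

A real controller would run this per buffer rather than per sample and substitute the downloaded audio effects function for the gain stage, but the branching is the same.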
Audio effects controller 520 can be or include a computerized device such as electronic processing circuitry, a microprocessor, a computer system, a digital signal processor, a controller, a personal computer, a workstation, a portable computing device, a console, a processing device, etc.
As shown, audio effects controller 520 of the present example includes an interconnect 111 that couples a memory system 112 and a processor 113. Interface 531 enables the audio effects controller 520 to receive motion signal 511 (as produced by a motion sensor device 510) and an audio signal 505. As previously discussed, audio effects controller 520 enables a respective user to apply audio effects based on a magnitude of detected motion as generated by the user.
As shown, memory system 112 is encoded with audio effects controller application 520-1 to perform the different functions as described herein. Functionality (such as the audio effects controller application 520-1) associated with the processor 113 can be embodied as software code such as data and/or logic instructions (e.g., code stored in the memory or on another computer readable medium such as a disk) that, when executed, support functionality according to different embodiments described herein.
During operation, processor 113 of electronic circuit 720 accesses memory system 112 via the interconnect 111 in order to launch, run, execute, interpret or otherwise perform the logic instructions of the audio effects controller application 520-1. Execution of audio effects controller application 520-1 produces processing functionality in audio effects controller process 520-2. In other words, the audio effects controller process 520-2 represents one or more portions of the audio effects controller application 520-1 (or the entire application) performing within or upon the processor 113 in the electronic circuit 720.
It should be noted that, in addition to the audio effects controller process 520-2, embodiments herein include the audio effects controller application 520-1 itself (i.e., the un-executed or non-performing logic instructions and/or data). The audio effects controller application 520-1 can be stored on a computer readable medium such as a floppy disk, hard disk, or optical medium. The audio effects controller application 520-1 can also be stored in a memory type system such as in firmware, read only memory (ROM), or, as in this example, as executable code within the memory system 112 (e.g., within Random Access Memory or RAM).
In addition to these embodiments, it should also be noted that other embodiments herein include the execution of audio effects controller application 520-1 in processor 113 as the audio effects controller process 520-2. Those skilled in the art will understand that the audio effects controller 520 can include other processes and/or software and hardware components, such as an operating system that controls allocation and use of hardware resources associated with the audio effects controller 520. Also, note that some or all of the embodiments herein can be implemented using hardware alone, software alone, and/or a combination of hardware and software.
Embodiments herein are well suited for use in applications such as those that support application of different audio effects to a received audio signal. However, it should be noted that configurations herein are not limited to such use and thus configurations herein and deviations thereof are well suited for use in other environments as well.
Number | Date | Country | Kind |
---|---|---|---|
PCT/US2006/021952 | Jun 2006 | WO | international |
This application is a continuation in part of and claims priority to earlier filed U.S. patent application Ser. No. 11/145,872, entitled “Method of and System for Controlling Audio Effects,” filed on Jun. 6, 2005, the entire teachings of which are incorporated herein by this reference. This application claims priority to earlier filed PCT patent application Ser. No. PCT/US2006/021952, entitled “Method of and System for Controlling Audio Effects,” filed on Jun. 6, 2006, the entire teachings of which are incorporated herein by this reference. This application is related to and claims the benefit of earlier filed U.S. Provisional Patent Application Ser. No. 60/776,638, entitled “Method of and System for Controlling Outputs,” filed on Feb. 24, 2006, the entire teachings of which are incorporated herein by this reference.
Number | Name | Date | Kind |
---|---|---|---|
5147969 | Hiyoshi et al. | Sep 1992 | A |
5449858 | Menning et al. | Sep 1995 | A |
6150947 | Shima | Nov 2000 | A |
6861582 | Street | Mar 2005 | B2 |
6995310 | Knapp et al. | Feb 2006 | B1 |
20030041721 | Nishitani et al. | Mar 2003 | A1 |
20030101863 | Street | Jun 2003 | A1 |
20030196542 | Harrison, Jr. | Oct 2003 | A1 |
20040200338 | Pangrle | Oct 2004 | A1 |
20050109197 | Garrett et al. | May 2005 | A1 |
20060060068 | Hwang et al. | Mar 2006 | A1 |
20060107822 | Bowen | May 2006 | A1 |
20060243123 | Ierymenko | Nov 2006 | A1 |
20060272489 | Remignanti | Dec 2006 | A1 |
Number | Date | Country | |
---|---|---|---|
20070169615 A1 | Jul 2007 | US |
Number | Date | Country | |
---|---|---|---|
60776638 | Feb 2006 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11145872 | Jun 2005 | US |
Child | 11709953 | US |