A portion of the disclosure of this patent document contains material which is subject to copyright protection. This patent document may show and/or describe matter which is or may become trade dress of the owner. The copyright and trade dress owner has no objection to the facsimile reproduction by anyone of the patent disclosure as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright and trade dress rights whatsoever.
1. Field
This disclosure relates to systems for recording and manipulating music and other audio content.
2. Description of the Related Art
Music creation and performance are activities enjoyed by people in every country of the world. Acoustic instruments have evolved over thousands of years, and their earliest electronic counterparts emerged nearly 100 years ago. The past decade has seen perhaps the most dramatic changes in how people produce music electronically, both individually and in groups. Digital samplers and synthesizers, computer-based recording and sequencing software, and advances in new control interfaces have all pushed musical activities forward, and some interesting practices have emerged.
One interesting practice is sequenced digital sample composition. Entire songs or backing tracks are now created from pre-recorded digital samples, stitched together in graphical software applications like Apple's Garage Band or Ableton's Live. This composition process usually involves a great degree of initial setup work, including finding samples, composing a piece, and scheduling the samples in the desired sequence. Some software programs allow for live performance and improvisation, using control surfaces with knobs, faders and buttons, or MIDI instruments to trigger the samples and to apply effects. A laptop computer is often brought to concerts to support live performance with these interfaces. An often-discussed problem in electronic music circles is the “laptop musician problem”: the computer as a musical interface leaves much to be desired from the audience's point of view. A “performer” on stage interacting directly with a laptop computer, focused on the screen and using a mouse and keyboard, is typically not capable of giving an expressive bodily performance. Rather, the audience sees the performer looking at the screen and hardly moving, with few clues as to the connection between physical actions and the sounds being produced. It has often been cynically observed that these performers may be checking their email rather than actively creating the sounds coming from their computers.
A second practice that has enjoyed great popularity in recent years is the phenomenon of music-based video games. Guitar Hero and its sequel have been perhaps the most successful musical video games to date, but there are a number of other examples. The important characteristic of these games for the present discussion is that they use game-oriented controllers. Some games, like Guitar Hero, use special controllers made expressly for the purposes of the game. However, these games may not allow for music creation and manipulation. Rather, they tend to enable musical “script-following,” in which gamers must press buttons in rhythm with pre-composed music or sing along with a pre-created song (i.e., karaoke). Games that allow for sequencing of samples do not permit on-the-fly recording of new samples by the musician, or continuous effects such as pitch-shifting and scrubbing.
Description of Apparatus
Referring now to
The interactive audio recording and manipulation system 100 may include additional controllers, such as controller 175, to allow two or more musicians to compose and/or perform as an ensemble. Two or more controllers 170/175 may be coupled to a common computing device 110, as shown in
The computing device 110 may be any device that has a processor 120, memory 130, and a storage device 140 and that may execute instructions, including, but not limited to, personal computers, server computers, computing tablets, set top boxes, video game systems, personal video recorders, telephones, personal digital assistants (PDAs), portable computers, and laptop computers. The computing device 110 may have a wired or wireless interface to the controller 170. The computing device 110 may be physically separate from the controller 170, or may be integrated with or within the controller 170. The coupling between the computing device 110 and the controller 170 may be wired, wireless, or a combination of wired and wireless. The computing device 110 may include software, hardware, and firmware for providing the functionality and features described here.
The computing device 110 may have at least one interface 125 to couple to a network or to external devices. The interface 125 may be wired, wireless, or a combination thereof. The interface 125 may couple to a network, which may be the Internet, a local area network, a wide area network, or any other network, including a network comprising one or more additional interactive audio recording and manipulation systems. The interface 125 may couple to an external device, which may be a printer, an external storage device, or one or more additional interactive audio recording and manipulation systems.
The computing device 110 may include an audio interface unit 150. The audio interface unit 150 may have at least one audio input port 152 to accept input audio signals from external sources, such as microphone 160 and electronic instrument 165, and at least one audio output port 154 to provide output audio signals to one or more audio output devices such as a speaker 180. The audio interface unit 150 may have a plurality of audio output ports to provide audio signals to a plurality of audio output devices, which may include multiple speakers and/or headphones. The audio input and output ports may be wired to the audio sources and audio output devices. The audio input and output ports may be wireless, and may receive and transmit audio signals using a wireless infrared or RF communication protocol, which may include Bluetooth, Wi-Fi, or another wireless communication protocol.
The computing device 110 and the audio interface unit 150 may include one or more of: logic arrays, memories, analog circuits, digital circuits, software, firmware, and processors such as microprocessors, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), programmable logic devices (PLDs) and programmable logic arrays (PLAs). The computing device 110 may run an operating system, including, for example, variations of the Linux, Unix, MS-DOS, Microsoft Windows, Palm OS, Solaris, Symbian, and Apple Mac OS X operating systems. The processes, functionality and features may be embodied in whole or in part in software which operates on the computing device and may be in the form of firmware, an application program, an applet (e.g., a Java applet), a browser plug-in, a COM object, a dynamic linked library (DLL), a script, one or more subroutines, or an operating system component or service. The hardware and software and their functions may be distributed such that some components are performed by the computing device 110 and other components are performed by the controller 170 or by other devices.
The storage device 140 may be any device that allows for reading and/or writing to a storage medium. Storage devices include hard disk drives, DVD drives, flash memory devices, and others. The storage device 140 may include a storage medium to store instructions that, when executed, cause the computing device to perform the processes and functions described herein. These storage media include, for example, magnetic media such as hard disks, floppy disks and tape; optical media such as compact disks (CD-ROM and CD-RW) and digital versatile disks (DVD and DVD±RW); flash memory cards; and other storage media.
The controller 170 may be any controller, such as a game controller, having a plurality of function buttons 174 and at least one continuous control 172, which may be a joystick, a thumb stick, a rotary knob, or other continuous control. The continuous control 172 may have two continuous control axes, as shown in
The controller 170 may be a single hand-held unit, as illustrated in
The left hand grip 210 includes a direction-pad or D-pad 240, also called a “hat switch”, that can be moved in four directions and is essentially equivalent to four function buttons. The D-pad 240 may be used to control the playback volume (VOL+/VOL−) and to control an audio effect for either recording (EFFECT REC) or playback (EFFECT PLAY). The left hand grip 210 includes three additional function buttons 250 which may be used to control REC (record), LOOP, and STOP functions that will be described in greater detail during the discussion of processes. The left hand grip 210 also includes a trigger (not visible) operated by the left index finger. The left trigger may be used to enable a pitch-shifting effect that will be described subsequently.
The right hand grip 220 includes four additional function buttons 260 which may be used to control the recording and playback of four recording tracks (A-D) as will be described in greater detail during the discussion of processes. The right hand grip 220 also includes a trigger (not visible) operated by the right index finger. The right trigger may be used to enable a scrubbing effect that will be described subsequently.
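By way of a hypothetical illustration, the button assignments described above might be represented in software as a mapping from raw button indices to named control functions. In the following Python sketch, the button indices and names are placeholders chosen for clarity, not the actual assignments of the game controller 200.

```python
from enum import Enum, auto


class Control(Enum):
    """Named control functions of the controller described above."""
    RECORD = auto()
    LOOP = auto()
    STOP = auto()
    TRACK_A = auto()
    TRACK_B = auto()
    TRACK_C = auto()
    TRACK_D = auto()
    PITCH_SHIFT = auto()  # left trigger
    SCRUB = auto()        # right trigger


# Placeholder button-index assignments, for illustration only; these are not
# the actual assignments of the game controller 200.
BUTTON_MAP = {
    0: Control.RECORD,
    1: Control.LOOP,
    2: Control.STOP,
    3: Control.TRACK_A,
    4: Control.TRACK_B,
    5: Control.TRACK_C,
    6: Control.TRACK_D,
    7: Control.PITCH_SHIFT,
    8: Control.SCRUB,
}


def pressed_controls(raw_button_states):
    """Translate raw button states (index -> pressed?) into named controls."""
    return {BUTTON_MAP[i] for i, down in raw_button_states.items()
            if down and i in BUTTON_MAP}
```

A mapping of this kind keeps the processes described below independent of any particular controller model, since only the mapping needs to change for a different controller.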
The Microsoft Sidewinder Dual-Strike game controller 200 shown in
The interactive audio recording and manipulation system 100 may be playable without requiring the use of a display screen. The interactive audio recording and manipulation system 100 may be controlled exclusively through the controller 170, a property that sets the interactive audio recording and manipulation system apart from most laptop-based music-making systems. The use of the controller 170 may allow a musician's attention to be focused on giving a compelling performance, and/or interacting with other musicians. Since the musician's attention is not focused on a display screen, the musician can more easily focus on their surroundings and the musical activity, making for a more engaging, more sociable music-making experience.
Description of Processes
The recorded tracks other than the master loop track (i.e. tracks B, C, and D in the example of
Each track set for looping, such as tracks B and C in the example of
A variety of techniques may be used to implement the triggers associated with the looping secondary tracks. Each trigger may be implemented as a tag attached to the master loop that initiates the playback of the associated secondary track as the master loop track is played. Each secondary track may have an associated trigger table that stores the time at which each trigger is to occur, and the playback of the secondary track may be initiated whenever the loop counter is equal to a time stored in the trigger table. The triggers for all of the secondary tracks may be stored in a common trigger table. The triggers and the master loop counter may be implemented as a set of linked data structures, or in some other manner.
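As one illustration of the common trigger table mentioned above, the table could map positions within the master loop to the secondary tracks whose playback begins at those positions. The following Python sketch assumes loop positions are counted in integer ticks (samples in this example); the class, track names, and tick values are illustrative only.

```python
from collections import defaultdict


class TriggerTable:
    """Maps positions within the master loop to tracks that start there."""

    def __init__(self, master_loop_length):
        self.master_loop_length = master_loop_length  # length in ticks
        self._triggers = defaultdict(list)            # tick -> [track ids]

    def add_trigger(self, tick, track_id):
        """Schedule a secondary track to start at the given loop position."""
        self._triggers[tick % self.master_loop_length].append(track_id)

    def tracks_to_start(self, tick):
        """Return the tracks whose playback should begin at this tick."""
        return list(self._triggers.get(tick % self.master_loop_length, []))


# Example: track "B" retriggers at the top of the loop, track "C" halfway in.
table = TriggerTable(master_loop_length=4 * 44100)
table.add_trigger(0, "B")
table.add_trigger(2 * 44100, "C")
```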
At any given time, some tracks may be looping and other tracks may not be looping. The looping tracks, including the master loop track if defined, may be in stable state 425. One or more non-looping tracks may be in stable state 435, or may not be recorded.
The transitions between the blocks of the process 400 may be controlled by the collective action of the Record, Loop, Stop, and track function buttons, which may be disposed on a controller such as the game controller 200. These function buttons may be employed to record and manipulate music and other audio content as shown in brackets adjacent to the dashed transitions in
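For illustration only, the per-track states and button-driven transitions might be organized as a small state machine in software. The following Python sketch is a hypothetical arrangement; the actual transitions of the process 400 are those shown in the figure, and the state names below merely parallel the stable states 425 and 435 described above.

```python
from enum import Enum, auto


class TrackState(Enum):
    EMPTY = auto()      # nothing recorded yet
    RECORDING = auto()  # Record held, audio being captured
    STOPPED = auto()    # recorded but not looping (cf. stable state 435)
    LOOPING = auto()    # recorded and looping (cf. stable state 425)


def next_state(state, record, loop, stop):
    """Advance one track's state from the function buttons currently held."""
    if stop and state is not TrackState.EMPTY:
        return TrackState.STOPPED
    if record:
        return TrackState.RECORDING
    if state is TrackState.RECORDING:
        # Releasing Record ends the take; Loop decides whether it repeats.
        return TrackState.LOOPING if loop else TrackState.STOPPED
    if loop and state is TrackState.STOPPED:
        return TrackState.LOOPING
    return state
```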
The master loop length may be defined by simultaneously activating the Record and Loop function buttons, in which case the loop length may be set to equal the duration for which both buttons were activated (565). The master loop length may also be defined by activating the Record and Loop function buttons and a track button, in which case the loop length may be set to equal the duration for which all buttons were activated and a master track having the same length as the loop length may be recorded (555) and set into a looping state (560).
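As a hypothetical sketch of the loop-length definition just described, the duration for which the Record and Loop buttons are held together could be measured with a monotonic clock and stored as the master loop length. The event-handler names and the use of seconds as the time unit are assumptions made for illustration.

```python
import time


class MasterLoop:
    """Tracks a master loop length derived from a Record+Loop button hold."""

    def __init__(self):
        self.length_seconds = None
        self._hold_started = None

    def on_record_and_loop_pressed(self):
        # Both buttons just went down together; start timing the hold.
        self._hold_started = time.monotonic()

    def on_record_or_loop_released(self):
        # Either button released; the hold duration becomes the loop length.
        if self._hold_started is not None:
            self.length_seconds = time.monotonic() - self._hold_started
            self._hold_started = None
```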
In situations where a plurality of musicians play a system for interactive audio recording and manipulation using a respective plurality of controllers, the master loop timer and the master loop length may be synchronized or common for the plurality of musicians. The master loop timer and master loop length may be synchronizable with an external device, such as a separate interactive audio recording and manipulation system played by another musician (575). The master loop timer may be synchronized with the second musician such that the two musicians perform or record using the same master loop length. The master loop timer may be synchronized by activating the loop function button for more than a preset time period, such as one second, in which case the master loop length and current time within the master loop cycle may be loaded from the second musician or from the second interactive audio recording and manipulation system. Alternatively, two or more musicians or two or more interactive audio recording and manipulation systems may be coupled such that changing the master loop length by any musician sends a signal 570 to all other systems to synchronously change the master loop length for all musicians.
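One way such synchronization might be realized, purely as an illustration, is to exchange the master loop length and the current position within the master loop cycle as a small message between the two systems. The following Python sketch assumes a JSON-over-UDP message; the port number, field names, and transport are assumptions, not part of the disclosed system.

```python
import json
import socket
import time

SYNC_PORT = 9000  # placeholder port, for illustration only


def send_sync(peer_address, loop_length, loop_start_time):
    """Send this system's loop length and current loop position to a peer."""
    message = json.dumps({
        "loop_length": loop_length,
        "loop_position": (time.monotonic() - loop_start_time) % loop_length,
    }).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, (peer_address, SYNC_PORT))


def apply_sync(message_bytes):
    """Adopt a peer's loop length and position within its loop cycle."""
    data = json.loads(message_bytes)
    return data["loop_length"], data["loop_position"]
```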
Pressing the D-pad to the “Effect Record” position (left as shown in
In the example of
In the example of
In a typical musical session with an interactive audio recording and manipulation system such as the system 100, a musician may begin by recording a percussive track or bassline, which will act as the master loop and as the “backing track” supporting the subsequent musical layering. Next, a vocal melody track may be recorded over the backing track. A harmony track to match the melody track may be recorded next. Short percussive sounds may be recorded, then sequenced at any number of desired offsets into the loop. All of these recording and layering activities utilize the buttons of the gamepad in various combinations. Finally, when this multi-layered musical creation is constructed, the musician may sing over it—sculpting their voice with pitch-shifting or reverberation. Individual samples that have been recorded may be “scratched” the way a DJ scratches a record. All of these continuous manipulations of the sound utilize the continuous degrees-of-freedom of the two-axis analog control, sometimes in conjunction with button-presses.
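As a final illustration, the two continuous control axes might be mapped to a pitch-shift ratio and a scrub position as sketched below. The normalization of the axes to the range [-1, 1], the one-octave pitch range, and the linear scrub mapping are assumptions made for this example.

```python
def pitch_shift_ratio(axis_value):
    """Map an axis value in [-1, 1] to a playback-rate ratio of 0.5x to 2.0x."""
    return 2.0 ** axis_value


def scrub_position(axis_value, sample_length):
    """Map an axis value in [-1, 1] to an absolute frame index in a sample."""
    return int((axis_value + 1.0) / 2.0 * (sample_length - 1))
```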
Closing Comments
Throughout this description, the embodiments and examples shown should be considered as exemplars, rather than limitations on the apparatus and procedures disclosed or claimed. Although many of the examples presented herein involve specific combinations of method acts or system elements, it should be understood that those acts and those elements may be combined in other ways to accomplish the same objectives. With regard to flowcharts, additional and fewer steps may be taken, and the steps as shown may be combined or further refined to achieve the methods described herein. Acts, elements and features discussed only in connection with one embodiment are not intended to be excluded from a similar role in other embodiments.
For means-plus-function limitations recited in the claims, the means are not intended to be limited to the means disclosed herein for performing the recited function, but are intended to cover in scope any means, known now or later developed, for performing the recited function.
As used herein, “plurality” means two or more.
As used herein, a “set” of items may include one or more of such items.
As used herein, whether in the written description or the claims, the terms “comprising”, “including”, “carrying”, “having”, “containing”, “involving”, and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” are, respectively, closed and semi-closed transitional phrases with respect to claims.
Use of ordinal terms such as “first”, “second”, “third”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but is used merely as a label to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
As used herein, “and/or” means that the listed items are alternatives, but the alternatives also include any combination of the listed items.
This patent claims priority under 35 USC 119(e) from Provisional Patent Application Ser. No. 60/878,772, entitled INTERACTIVE AUDIO RECORDING AND MANIPULATION SYSTEM, filed Jan. 5, 2007.