This application claims the benefit of PPA U.S. 60/777,373, filed 27 Feb. 2006 with the USPTO by the present inventors.
Not Applicable.
Not Applicable.
1. Field of the Invention
The present invention relates to audio processors and synthesizers, specifically to the methods and systems by which users may control and/or configure them.
2. Discussion of Prior Art
Historically, these devices contained analog circuits. Users controlled these circuits by turning knobs connected to potentiometers, pushing keyboard keys connected to switches, etc., thus altering the circuit itself and therefore the sound it produced. If reconfiguration was possible, it was done by physically moving or connecting wires between circuit elements, usually via physical switches or patch cords, though more recently through electronic switching.
Currently, most such devices contain digital processors (CPUs, DSPs, or custom digital logic), which create, modify, and/or play back audio via digital algorithms. (Occasionally the digital processors control analog circuits, but the application to this invention remains the same.) There are thousands of such devices today, far too many to individually enumerate here, and known collectively as “stompboxes”, “multi-effect units”, “synthesizers”, and so on.
Users can control and/or configure these devices in several different ways, depending on the device in question. First, almost every such device has some combination of buttons, knobs, lights, and displays accessible to the user. However, their number, utility, and ability to present information are limited by the typically small size of such devices, and by the cost of including them in each unit sold.
It is, therefore, advantageous to provide some way to control and/or configure the device by connecting it to a general-purpose computer, such as a desktop, notebook, or handheld (Palm, Handspring, Treo, etc.). This allows the user to use the display and interface devices of the computer, such as mice, keyboards, touchscreens, trackpads, and high-resolution displays, which are usually far superior to those built into the device. Here are the typical methods and systems of doing so, as of this writing:
1) Many such devices have MIDI ports, which allow the user to create digital data elsewhere (for example, with a digital keyboard or a computer) and send it to the device. MIDI is a one-way protocol and is very slow (31.25 Kbits/second—slower than modern modems), making it ill-suited to interactive communication with a computer.
Despite these issues, this method has been used occasionally in the past. However, because MIDI is a one-way protocol, the device cannot transfer software to the computer; the user must first find and install special software on his computer in order to control the device in this way.
2) Recently, some such devices have incorporated USB ports, which allow the user to connect the device to a computer. Typically this is only used to transfer digital audio files back and forth, not for control or configuration of the device; the device appears as a generic mass storage device to the computer, is treated as part of the computer's filesystem, and files are transferred through the standard filesystem interface.
Some devices allow the user to interactively control and/or configure the device from a computer via USB. However, in every case we are aware of, the user must first find and install special software on his computer in order to do so. (Also, in every case we are aware of, this software is specific to the computer and operating system.)
The best example we know of is the Nord Modular (made by Clavia), an audio processor and synthesizer. The Nord can be controlled and configured via graphical interface software running on a general-purpose computer, but only when directly connected to a computer through a USB port, and only after the user manually installs the program NMG2Editor (which runs only under Windows and Macintosh operating systems).
3) Most recently, a very few such devices have incorporated Ethernet ports. This allows the user to connect the device to a standard computer network. Since such devices are most relevant to this invention, we will discuss each such device known to us at this time and its network functionality in order to demonstrate the uniqueness of this invention.
The first example is the Muse Receptor, a rack-mountable audio processor and synthesizer (“rack unit”). The Receptor can be controlled and configured in real-time via graphical interface software running on a general-purpose computer anywhere on the same network, and its internal software can be updated via the network. However, to control and configure the Receptor, the user must first install the program ReceptorRemote on each computer he wishes to use in this way. (And ReceptorRemote runs only under Windows and Macintosh OSX operating systems.)
The second example is the Looperlative LP1, a single-purpose rack-mountable audio processor (“rack effect”) designed to loop audio during real-time performances. Connecting it to a network allows it to automatically download updates to its internal software, and allows the user to upload and download raw audio files. However, it cannot be controlled or configured via the network, and the interface is both non-interactive and entirely text-based, having less functionality than the raw filesystem interface provided by all modern computers and operating systems (Windows, Mac OSX, Linux, etc.)
The third example is the Manifold Labs Plugzilla, a rack unit essentially similar to the Muse Receptor, though the remote control and configuration options are apparently limited to adding and removing plugins. Configuration requires the user to have previously installed a Windows application called PZView.
There are also many software programs that use a general-purpose computer's own processing power, or dedicated DSPs connected directly to the computer, to process and/or synthesize audio. Most such programs have graphical interfaces, and all are known to those skilled in the art of computer-based electronic music. The oldest and best-known is the program MAX and its various incarnations, beginning as Patcher in 1986 on the Macintosh, becoming MAX/FTS in 1989, and subsequently MAX/MSP and Pd. It allows control and configuration of real-time MIDI and audio data processing via a graphical drag-and-drop interface, and predates most patents on such systems, such as U.S. Pat. No. 6,981,208 (Milne et al., 2005). Other such programs include Reaktor, SuperCollider, and the various graphical interfaces to Csound.
However, Milne et al. specifically claim and describe their graphical interface as running locally, on the same computer system as the audio engine. As of this writing, Reaktor's graphical interface is part of the program and cannot be run remotely. SuperCollider's and Pd's graphical interfaces can be run remotely, but like the devices and inventions previously described, the user must first install the graphical interface software on any computer he wishes to use in this way.
In conclusion, there are many electronic audio processors and synthesizers that can be controlled and/or configured remotely from a general-purpose computer, often by a drag-and-drop or other graphical interface. There are also many software programs that give this functionality to a general-purpose computer.
However, we are not aware of any such device, or any patent describing any such device, that allows the user to control and/or configure the device from a general-purpose computer without previously having to find and install special control and/or configuration software on that computer. This disadvantage has several consequences, including the following:
The software installation process takes time, is inconvenient, and must be repeated every time the user wishes to use a different computer to control and/or configure the device.
The version of the software on the computer may be incompatible with the version of the device the user is attempting to control and/or configure. Attempting to maintain compatibility and detect non-compatibility across multiple potential combinations of software versions is a major problem for both developers and users of the software and device.
When updates are desired or required, the user must update both the device and the computer software.
The user may not have access to the installation media or the Internet from the computer in question, leaving him unable to install the software and control and/or configure the device.
Particularly on multi-user systems, the user may not have permission to install the software on the computer available to him, leaving him unable to control and/or configure the device.
The objects and advantages of previous inventions as described above are to allow the user to control and/or configure a device incorporating an embodiment of the invention from a general-purpose computer. This allows the user to use the display and interface devices of the computer, which are generally much more capable and easier to use than the few knobs, buttons, and small displays that can fit into the form factors typical for audio processors and/or synthesizers.
Since devices incorporating an embodiment of these inventions do not need to integrate a graphic display or other complex visual interface, nor must they integrate large input devices like keyboards or mice (or awkward substitutes for such), such devices can be manufactured at a smaller size and at lower cost. Additionally, such devices are typically less physically fragile, and use less power, than those with displays or large input devices.
The objects and advantages of the present invention over previous inventions as described above are to allow the user to control and/or configure electronic audio processors and/or synthesizers from a general-purpose computer without first having to find and install special control and configuration software on each computer he wishes to use in this way.
Additional consequences of this advantage are:
The version of the software is never incompatible with the version of the device the user is attempting to control and/or configure. As previously described, attempting to maintain compatibility and detect non-compatibility across multiple potential combinations of software versions is a major problem for both developers and users of the software and device.
When updates are desired or required, only the device must be updated: the software is updated as part of the device update.
The software is always available to the user, even if the user doesn't have access to the installation media, the Internet, or a computer with the software already installed.
In preferred embodiments of the invention, the user does not need permission to install software on, or run software from, a locally accessible filesystem.
In preferred embodiments of the invention, the user does not need to own a specific type of computer hardware or run a specific operating system in order to use the invention (although platform-specific software is still possible within the scope of the invention.)
The invention, a method and system of controlling and/or configuring an electronic audio processor and/or synthesizer, comprises two elements. First, stored within the memory of such a device, or within memory or other data storage attached to or integrated with the device, are the application program(s) and associated data (collectively known as “software”) required for the user to control and/or configure the device itself from a general-purpose computing device. Second, the device comprises a network or bi-directional data port that allows it to be connected to a general-purpose computer or computer network.
To use the method and system of the invention, the user connects the device to a computer or computer network, establishes a connection from a computer on the network to the device, and requests interaction with the device. The device transfers the software to the computer, and the software runs on the computer. The user then interacts with the software, the software communicates with the device, and the device controls and/or configures itself as per the communication. The user repeats these steps (interaction, communication, control and/or configuration) until he is satisfied with the results, and the device continues to function as controlled and/or configured.
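The core of the method above — the device itself storing and serving its own control software, so that the user's computer needs nothing preinstalled beyond a standard client such as a web browser — can be sketched as follows. This is a minimal, hypothetical illustration in Python, not the actual embodiment (which transfers a Java application); all names, the JSON message format, and the parameter set are assumptions.

```python
# Minimal sketch of a device-side server: the device stores its own control
# software in memory, transfers it to any computer that connects (step 108),
# and applies configuration commands sent back by that software (steps 114-116).
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# The control software, held in the device's own memory. A real device would
# embed a full application here rather than a bare page.
CONTROL_PAGE = b"<html><body><h1>Stompbox Control</h1></body></html>"

# The device's current configuration, updated by client requests.
config = {"delay_ms": 0, "bypass": True}

class DeviceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            # Transfer the stored control software to the computer.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(CONTROL_PAGE)
        elif self.path == "/config":
            # Report the device's actual state back to the control software.
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(json.dumps(config).encode())
        else:
            self.send_error(404)

    def do_POST(self):
        # The control software sends new settings; the device configures
        # itself accordingly.
        length = int(self.headers["Content-Length"])
        config.update(json.loads(self.rfile.read(length)))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the sketch quiet

def start_device(port=0):
    """Start the device's embedded server; port 0 picks a free port."""
    server = ThreadingHTTPServer(("127.0.0.1", port), DeviceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Once started, any computer on the network can fetch `/` to obtain the control software and `/config` to read or change the device's state, with no prior installation.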
Some definitions, as used in this document:
An “audio input” is a means by which sound waves, or digital or analog representations of sound waves, may be introduced into a device. This means may be dedicated specifically to the task of gathering audio, e.g. a microphone or a ¼″ audio jack carrying an analog audio signal, or shared, e.g. a USB or Ethernet connection carrying digital audio data.
An “audio output” is a means by which sound waves, or digital or analog representations of sound waves, may be produced by a device. This means may be dedicated specifically to the task of producing audio, e.g. a speaker or a ¼″ audio jack carrying an analog audio signal, or shared, e.g. a USB or Ethernet connection carrying digital audio data.
An “audio processor” or “audio processing device”, frequently known as an “effects processor”, takes one or more audio inputs, modifies the audio in some way, and sends it to one or more audio outputs. Examples of such modifications, which may be combined, include delay, waveshaping, equalization, and modulation of these modifications by internally or externally generated waveforms, producing results known commonly as “flanging”, “distortion”, “reverberation”, etc. This modification may be performed directly by a digital processor, or indirectly in part or full by analog circuits controlled by the digital processor.
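As a purely illustrative sketch (not taken from the specification) of the simplest modification named above, a digital delay mixes each incoming sample with an attenuated copy of the output heard some fixed number of samples earlier; the function name and parameters here are assumptions.

```python
# A minimal digital delay: each output sample is the input plus a scaled
# copy of the output from delay_samples earlier (feedback delay line).
def delay_effect(samples, delay_samples, feedback=0.5):
    """Return samples mixed with a delayed, attenuated copy of themselves."""
    out = []
    for i, s in enumerate(samples):
        delayed = out[i - delay_samples] if i >= delay_samples else 0.0
        out.append(s + feedback * delayed)
    return out
```

Applied to a single impulse, the result is the familiar train of decaying echoes at intervals of `delay_samples`.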
An “audio synthesizer”, “audio synthesizing device”, or “synthesizer” creates audio (this can include playback on demand of previously stored audio, synthetic generation of audio waveforms, and/or combinations of both) and sends it to one or more audio outputs. This audio is typically generated according to user manipulation of the device's controls, or an input data stream representing manipulation of such controls. This generation may be performed directly by a digital processor, or indirectly in part or full by analog circuits controlled by the digital processor.
An “audio player” or “audio playing device” is a special case of an audio synthesizer, which stores previously created representations of audio either within itself or on removable media connected to it, and sends the audio on demand to one or more audio outputs. It may modify the stored audio on output. This playback may be performed directly by a digital processor, or indirectly in part or full by analog circuits controlled by the digital processor.
An “audio recorder” or “audio recording device” is a special case of an audio player, with the additional ability to record and store incoming audio in real-time.
A “portable audio player” or “portable audio recorder” is a special case of an audio player or recorder, which can run from an internal power source and is easy to carry along in the course of most normal daily activities. These devices are often colloquially known as “MP3 players”, even though MP3 is only one of the audio data formats they can interpret.
(Note: Many modern devices popularly known as “audio players” or “MP3 players” also have the ability to record audio, so the semantic line between “player” and “recorder” is somewhat blurred in everyday usage. Many modern devices popularly known as “synthesizers” also have the ability to process audio, giving them some functions of “effects processors”, and vice versa. In general, for modern devices in which audio playback, processing, and synthesizing is entirely or substantially performed by digital processors executing digital algorithms, it is almost always possible for the same device to record, process, synthesize, and play audio. The distinction, therefore, is usually one of software and frequently one of primary intended function, not of capability of the physical circuits comprising the device.)
A “stompbox” is a special case of an audio processor, which is designed to be placed in the audio signal chain between an electric musical instrument, such as a guitar, and an amplification device for such an instrument, such as a guitar amplifier. Its enclosure rests on the floor in typical use, and it generally comprises at least one control which the user can operate with a shod foot without damaging the device—usually a switch that bypasses its processing when turned off.
By “device”, we mean “audio processor and/or synthesizer” unless stated otherwise.
By “controlling and/or configuring”, we mean the act of changing, rearranging, substituting, loading, and/or saving audio processing, synthesis, recording, and/or playback algorithms, parameters to said algorithms (including audio data), signal routing between said algorithms, and/or properties of audio inputs, audio outputs, physical controls, displays, and/or other features of such a processor and/or synthesizer.
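Concretely, a configuration in the sense just defined can be represented as structured data describing the algorithm chain, its parameters, and the signal routing. The following sketch is purely illustrative; the algorithm names, parameter names, and jack labels are assumptions, not taken from the specification.

```python
# A configuration ("patch") as data: an ordered chain of processing
# algorithms, the parameters of each, and the routing from an audio
# input to an audio output.
patch = {
    "chain": [
        {"algorithm": "distortion", "params": {"gain": 0.7}},
        {"algorithm": "delay", "params": {"time_ms": 350, "feedback": 0.4}},
    ],
    "routing": {"input": "jack_1", "output": "jack_2"},
}
```

Controlling and/or configuring the device then amounts to changing, rearranging, loading, or saving such a structure.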
By “computer”, we mean any general-purpose computing device that can be connected to a network. At this writing, this typically means a desktop, notebook, or PDA.
By “computer network”, we mean any means by which a computer can send and receive data from other computers or (generic, not just audio) devices on the network. At this writing, this typically means peer-to-peer networks such as Ethernet, 802.1x, and other Internet networking technologies, although non-peer-to-peer connections such as USB, Firewire, Bluetooth, and generic serial port connections are also within the scope of this definition and invention. Please note that it only requires two devices to make a network: for instance, a computer connected to an audio synthesizer through a USB port is a network with two nodes.
By “network connection”, we mean any connection of the type described in the previous definition of “computer network”.
(The preceding two definitions allow us to avoid the cumbersome “computer and/or computer network” and “network and/or data connection” circumlocutions.)
By “software”, we mean any combination of program(s), subroutines, code fragments, and data associated with them. The data may be embedded in the program or stored separately.
By “client software”, we mean software that requests and receives data and/or services from another system known as the “server” and running “server software”, the server usually, but not necessarily, located on another computer or device. (In a strict definition of “client” and “server”, the server cannot provide any services or data without an explicit request from the client: however, as is common to those skilled in the art, we use these terms less strictly, and the server is allowed to push data or provide services to the client without an explicit request. Otherwise we are forced into circumlocutions such as “peer-to-peer software whose primary role is as a server to peer-to-peer software whose primary role is as a client”—which are themselves misleading, because true peer to peer software must be able to both request and provide services to and from any other instance.)
Both “memory” and “storage” refer to data storage accessible by a digital processor, and usage of one or the other is primarily a matter of custom rather than definition. “Memory” can mean both volatile and non-volatile data storage, usually internal to a computing device. By “storage”, we usually mean external non-volatile data storage.
For clarity, we assume at the start of the flowchart in
In 102, the user connects the network port 312, 460 of the stompbox 311, 400 to an open port on the computer network 310. In 104, the user establishes an HTTP connection between the computer and the stompbox, by typing the IP address of the stompbox into the address bar of a web browser or by calling up a previously saved bookmark. This also serves as a request for interaction with the stompbox 106.
In 108, the HTTP request from 104 also causes the stompbox to transfer software (in this embodiment, a Java application called AGE and its associated data) to the computer, and causes the computer to run AGE 110.
(Please take special note of 106, 108, and 110, as they embody the major improvements and inventive steps of our invention. Previous inventions simply assume that the software already exists on the computer, ignoring the problems of how and when the software got there, whether the software is a correct or compatible version, and other problems previously enumerated.)
In 112, the user interacts with AGE, using the display 302, keyboard 304, and mouse 308 attached to the computer 306.
In 114, AGE communicates the results of the user's interaction to the stompbox. In 116, the stompbox controls and/or configures itself as per the communication. Note that at any time during this process, the stompbox may communicate results of this communication, or any other data, to AGE (not shown in flowchart because it can happen at any stage). Examples of such communications include actual vs. requested state, audio data at a specified stage of processing, state of physical controls on the stompbox 402, 404, 406, dynamically loaded application programs to control and/or configure other aspects of the device or other similar devices, input sensitivity and calibration, network configuration, and so on.
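The "actual vs. requested state" exchange just described can be sketched as follows; this is a hypothetical illustration, and the class name, parameter names, and hardware limit are assumptions rather than details of the embodiment.

```python
# Sketch of steps 114-116 with state reported back: the control software
# requests a configuration, the device applies what its hardware permits,
# and reports the actual resulting state for display to the user.
MAX_DELAY_MS = 2000  # assumed hardware limit of the device's delay line

class Stompbox:
    def __init__(self):
        self.state = {"delay_ms": 0, "bypass": True}

    def configure(self, requested):
        """Apply a requested configuration; report actual vs. requested."""
        actual = dict(requested)
        # The device may be unable to honor a request exactly; it applies
        # the nearest valid setting and tells the client what really happened.
        if "delay_ms" in actual:
            actual["delay_ms"] = min(actual["delay_ms"], MAX_DELAY_MS)
        self.state.update(actual)
        return {"requested": requested, "actual": actual}
```

For example, requesting a 5000 ms delay on this sketch yields an actual value of 2000 ms, which the graphical interface can then show the user in place of the requested value.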
In 118, the user evaluates the results of his interaction, usually by playing the electric musical instrument 336 and listening to the resulting audio output through the amplifier 332 and speakers 334. If the results are not yet satisfactory, the user returns to 112 and continues interacting with AGE.
If the results are satisfactory, the stompbox continues to function as currently controlled and configured 120, even if the user closes his browser or disconnects the stompbox from the network.
Our invention has been described in terms of its preferred embodiments, but is not limited to them. The description is not intended to be exhaustive, to limit the invention to the exact forms disclosed, or to enumerate every possible function of the forms described. The embodiments have been chosen to clearly illustrate the principles of the invention and their practical application, so that those skilled in the art can understand, modify, improve, and combine features of the invention or its embodiments, and apply them to other embodiments not specifically described herein.
For clarity and brevity, we use words as defined at the beginning of the Detailed Description in the claims that follow, unless otherwise indicated. For instance, words such as “software” and “audio input” have a specific definition as applied to this invention, and are used in that sense unless modified.
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/US2007/005023 | 2/27/2007 | WO | 00 | 8/27/2008 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2007/100798 | 9/7/2007 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
20020002541 | Williams | Jan 2002 | A1 |
20020082730 | Capps et al. | Jun 2002 | A1 |
20030018755 | Masterson et al. | Jan 2003 | A1 |
20050278760 | Dewar et al. | Dec 2005 | A1 |
Number | Date | Country | |
---|---|---|---|
20090055007 A1 | Feb 2009 | US |