Preferred embodiments of the present invention are included in the music authoring program called “Muse 1.0,” manufactured by Eolas Technologies Inc. The source code for Muse 1.0 is provided with this application in the source code Appendix. The Appendix should be consulted for details about a preferred embodiment of the invention. The Muse 1.0 program uses Tcl/Tk code and Tcl Starkit technology.
Additionally, a hardcopy appendix has been included that describes the application programming interface (API) for the Muse 1.0 product.
The present invention is presented below and is discussed in connection with a preferred embodiment and with the Figures. First, an overview of the invention and a preferred embodiment are presented. Next, features of the Muse 1.0 program are discussed. Finally, the standard hardware appropriate for use with the present invention is described.
In
Once the user finalizes the visual presentation of sound, the user can simply view the presentation or save the presentation in a musical artwork file to facilitate playback at a later time, to link the musical artwork file to an object in the presentation, to share compositions with other users, etc. A user can also save the audio playback to a file; use the resulting audio and/or image data as input to other programs or functions; import items, such as predefined objects, images or video, that can act as objects or paths within the presentation; import other presentation characteristics; play music directly from a PC keyboard; and perform other functions.
A first step in creating music or any other sound composition by using electronically generated visual images is to display one or more graphical objects on the canvas of a display screen using drawing tools. Alternatively, a user can display graphical objects on the display screen canvas by importing existing graphical objects. Graphical objects in the present invention can include any electronic visual image, including photographic images, graphical images, video images, etc., that is either drawn directly onto the display screen canvas or imported and displayed on the display screen canvas.
In a preferred embodiment of the present invention, a graphical object can have a tone characteristic and a color characteristic. As shown in
To modify the tone or color characteristic of a graphical object, the user can select the “Fill” button at the bottom of the dialog box, select a new tone and color from the onscreen keyboard using the steps discussed above, and left-click the mouse on the graphical object to change the tone and color of the graphical object.
It is important to note that in the present invention the characteristics of a graphical object can be any visual or audio characteristic or none at all. For example, as shown in
It is also important to note that the present invention does not limit a tone to a musical note or a musical chord. Rather, a tone can include any audible sound such as a bell, a siren, a voice, etc. Finally, in the present invention, the user's method for the selection of tones and colors etc. is not limited to an onscreen keyboard, but can be accomplished using PC keyboards, touch pads, data files, etc.
As shown in
Next, as shown in
After a graphical object is displayed on the display screen canvas, the user can move the graphical object to a new position on the canvas by selecting the “Move” button at the bottom of the dialog box 3 of
In a preferred embodiment, one or more graphical objects can be drawn on the display screen canvas. A new graphical object can be drawn each time using the steps outlined above. Alternatively, the user can duplicate a graphical object that has already been displayed, by selecting the “Clone” button on the dialog box toolbar, left-clicking the mouse on the graphical object and holding the left mouse button down while using the mouse to drag a copy of the graphical object to a desired position on the display screen canvas.
Multiple graphical objects can also be grouped together as a single unit and duplicated or moved by selecting the “Group” button on the Muse 1.0 dialog box toolbar (the “Group” button changes to an “End Grp” button), using the mouse to place a band around the graphical objects being grouped and then selecting either the “Clone” button to duplicate the grouped graphical objects or the “Move” button to move the grouped graphical objects. Once the user is finished moving or duplicating the grouped objects, the user can ungroup the objects by selecting the “End Grp” button.
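The group behavior described above (moving or cloning several objects as a single unit) can be sketched in a few lines. The following Python sketch is illustrative only; the actual Muse 1.0 implementation is in Tcl/Tk, and the class and method names here are assumptions:

```python
# Illustrative sketch of grouping: moving or cloning a group applies the
# same operation to every member. Names are assumptions, not Muse 1.0 code.

class Group:
    def __init__(self, objects):
        self.objects = list(objects)   # object positions as (x, y) pairs

    def move(self, dx, dy):
        """Translate every grouped object by the same offset."""
        self.objects = [(x + dx, y + dy) for x, y in self.objects]

    def clone(self, dx, dy):
        """Duplicate the whole group at an offset, leaving the original in place."""
        return Group((x + dx, y + dy) for x, y in self.objects)
```

Ungrouping (the “End Grp” button) then simply amounts to discarding the group wrapper while keeping the member objects.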
To delete a graphical object, the user right-clicks the mouse on the graphical object which causes a popup menu to appear. The user selects the “Delete” option from the popup menu. After selecting the “Delete” option, a confirmation popup dialog box appears. Selecting the “Yes” button from the confirmation popup dialog box deletes the graphical object and selecting the “No” button cancels the deletion request.
A next step in creating sound by using electronically generated visual images is to move a tracking object within the display screen canvas so that when the tracking object is in a predetermined relationship with a graphical object a tone sounds. The present invention is not limited to a particular predetermined relationship between the graphical object and the tracking object. For example, the tone can be sounded when the tracking object is in proximity to the graphical object, within the graphical object, at a point of impact with the graphical object or when impact with the graphical object is ended, at an entry boundary of the graphical object or at an exit boundary of the graphical object, etc. Additionally, the criterion for triggering the sounding of the tone can change dynamically over time (during execution) or can be different for different graphical objects, etc. In a preferred embodiment, the graphical object is highlighted when the tone sounds.
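One way to realize the predetermined relationship described above is a simple per-frame hit test that classifies the tracking object's position against an object's bounds and fires on entry, exit, or continued contact. The Python sketch below is an assumption for clarity (circular bounds, a fixed proximity threshold); the actual Muse 1.0 implementation is in Tcl/Tk and may use different geometry:

```python
# Illustrative sketch (not the Muse 1.0 source): classify a tracking
# object's position relative to a graphical object, then decide whether
# a tone-triggering event occurred. All names/thresholds are assumptions.

from dataclasses import dataclass

@dataclass
class GraphicalObject:
    x: float        # center x of the object's bounds
    y: float        # center y of the object's bounds
    radius: float   # simple circular bounds for the hit test
    tone: str       # tone to sound when triggered

def relationship(tx, ty, obj, proximity=10.0):
    """Return 'inside', 'proximity', or 'outside' for the tracker position."""
    dist = ((tx - obj.x) ** 2 + (ty - obj.y) ** 2) ** 0.5
    if dist <= obj.radius:
        return "inside"
    if dist <= obj.radius + proximity:
        return "proximity"
    return "outside"

def check_trigger(prev_state, new_state):
    """Key the tone to the entry boundary or the exit boundary."""
    if prev_state == "outside" and new_state != "outside":
        return "entry"    # tracker just reached the object
    if prev_state != "outside" and new_state == "outside":
        return "exit"     # tracker just left the object
    return None
```

Because the specification allows the triggering criterion to change dynamically or differ per object, the `proximity` threshold and the states that count as a trigger would be per-object properties in practice.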
In a preferred embodiment of the invention, there are two categories of tracking objects: default tracking objects and user-defined tracking objects. In both categories, the tracking object moves along a path on the display screen canvas and a tone is sounded when the tracking object is in a predetermined relationship with a graphical object.
Each of the default tracking objects moves across the display screen canvas along a path in a default direction and speed. The values for direction, speed and other object properties can be programmed by the software manufacturer. The user can be allowed to change the default settings. One way to allow user specification of object properties is via menu or control selections. Another approach is to provide an editable properties file that is read by the program upon startup, creation of a new canvas, or some other event.
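An editable properties file of the kind mentioned above could be as simple as key/value lines read at startup. The file layout and key names below are hypothetical, shown only to make the idea concrete; the specification does not prescribe a format:

```python
# Hypothetical sketch of reading default tracker properties from an
# editable file at startup. Keys and format are illustrative assumptions,
# not the actual Muse 1.0 file layout.

def load_tracker_defaults(lines):
    """Parse simple 'key = value' lines into a settings dictionary."""
    settings = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue                       # skip blanks and comments
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip()
    return settings

example = """
# default tracking object properties (illustrative)
tracker1.direction = left-to-right
tracker1.speed = 40
tracker2.direction = top-to-bottom
tracker2.speed = 25
""".splitlines()
```

The program would read such a file upon startup, creation of a new canvas, or some other event, as the text above notes.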
The user can disable the default tracking objects by either selecting the “Tools” button from the dialog box toolbar and selecting the “Hide Default Trackers” option from the drop-down menu or by pressing the “F2” function key on a PC keyboard. To enable the default tracking objects the user can select the “Tools” button from the dialog box toolbar and select the “Show Default Trackers” option from the drop-down menu or press the “F2” function key again on a PC keyboard. When the presentation is saved by steps discussed in detail below, the state of the default tracking objects (either enabled or disabled) is also saved.
In a preferred embodiment, the Muse 1.0 system provides two features that allow a user to create a user-defined tracking object and to set the movement of the tracking object on the display screen canvas. The two features are herein further referenced as the “path feature” and the “point feature.” Both features allow a user to set the movement of a tracking object along a path on the display screen canvas based on direction, speed, etc.
The path feature allows the user to create a tracking object path by first selecting the “Path” button from the bottom of the Muse 1.0 dialog box of
In a preferred embodiment, the user can make the user-defined path visible while it is being drawn or after it has been drawn, by first selecting the “Tools” button from the Muse 1.0 dialog box toolbar and then selecting the “Show Paths” menu option or by pressing the “F1” function key on a PC keyboard. To hide the path, the user can select the “Tools” button from the dialog box toolbar and then select the “Hide Paths” menu option or the user can press the “F1” function key on the PC keyboard again.
Using the path feature, the speed of a tracking object is initially set according to the speed at which the mouse is dragged along the display screen canvas when the path is being drawn. In other words, the faster the mouse is dragged while drawing the path, the faster the tracking object will move along the path during playback. So to produce, for example, a rapid or repeating pattern of sound, the path can be drawn with quick or short movements of the mouse. To produce a varied pattern of sound, the path can be drawn with varied quick or short mouse movements in combination with slow or long mouse movements.
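The drag-speed behavior described above follows naturally if the drawn path is captured as timestamped samples: playback then reproduces the drawing timing, so fast drags yield fast segments. The sampling approach below is an assumption sketched in Python; the actual Muse 1.0 implementation is in Tcl/Tk:

```python
# Illustrative sketch: capture a drawn path as (x, y, time) samples so that
# playback speed reflects drawing speed. Names are assumptions.

def record_sample(path, x, y, t):
    """Append a mouse position together with its capture time."""
    path.append((x, y, t))

def segment_speeds(path):
    """Speed of each path segment: distance / elapsed time.
    Quick mouse movement while drawing yields high segment speeds, so the
    tracking object moves quickly over those segments during playback."""
    speeds = []
    for (x0, y0, t0), (x1, y1, t1) in zip(path, path[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        dt = max(t1 - t0, 1e-9)   # guard against zero elapsed time
        speeds.append(dist / dt)
    return speeds
```

A short, quickly drawn loop thus produces a rapid repeating pattern of sound, while mixing quick and slow strokes produces the varied pattern described above.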
The speed of individual tracking objects can also be set by right-clicking the mouse on the tracking object itself or on the path of the tracking object. Right-clicking on the tracking object or tracking object path causes a popup menu to display. By selecting the “Tracker Speed” popup menu option, a list of tracking object speeds is displayed for the user to select from. To make right-clicking the tracking object easier, the user can stop the movement of the tracking object by selecting the “Pause” button on the Muse 1.0 dialog box toolbar before right-clicking the mouse on the tracking object. To make selecting the tracking object path easier, the user can first make the path visible by selecting the “Tools” button on the Muse 1.0 dialog box toolbar and then selecting the “Show Paths” menu option or by pressing the “F1” function key on a PC keyboard.
The point feature allows the user to define the path of a tracking object by selecting the “Point” button on the dialog box toolbar of
To set the speed at which the tracking object 41 moves along the path 45 using the point feature, the user can left-click the mouse several times at a particular cursor position before selecting a new cursor position, which causes the tracking object 41 to pause at that position. Similar to the tracking objects created using the path feature, the speed of a tracking object can also be set from a list of tracking object speeds by right-clicking the mouse on the tracking object itself or on the path of the tracking object.
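The point feature's click-to-pause behavior can be modeled by treating the path as the ordered list of clicked points, where consecutive clicks at the same position become a dwell. This Python sketch is an assumption for illustration, not the Muse 1.0 source:

```python
# Illustrative sketch of the point feature: each left-click appends a
# cursor position to the path; repeated clicks at the same position make
# the tracking object dwell there. Function names are assumptions.

def add_point(path, x, y):
    """Record one left-click at cursor position (x, y)."""
    path.append((x, y))

def dwell_counts(path):
    """Collapse consecutive duplicate points into (point, count) pairs;
    a count greater than 1 means the tracker pauses at that position."""
    result = []
    for p in path:
        if result and result[-1][0] == p:
            result[-1] = (p, result[-1][1] + 1)
        else:
            result.append((p, 1))
    return result
```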
In view of the discussion above, it should be apparent that the Muse 1.0 interface provides many controls and features that allow a user to easily author a visual presentation of sound. Further, as discussed below, the Muse 1.0 interface provides additional controls and features that allow a user to create more sophisticated visual sound presentations.
In
As shown in
Additionally, as shown in
As shown in
Another feature of a preferred embodiment allows a user to treat a mouse cursor like a tracking object so that when the mouse is moved on the display screen canvas and is in a predetermined relationship with a graphical object, a tone will sound as if the mouse cursor were a moving tracking object.
A further feature of a preferred embodiment allows a user to move graphical objects and tracking objects from the foreground of the display screen canvas to the background of the display screen canvas and vice-versa. To move an object from the background to the foreground, the user can right-click on the object, which causes a popup menu to display, and select the “Raise” option. To move an object from the foreground to the background, the user right-clicks on the object and selects the “Lower” option from the popup menu.
A further feature of a preferred embodiment allows a user to control the overall volume of sound in a graphical presentation of sound or to control the volume at which an individual tone is sounded. To control the overall volume of sound, the user can adjust the volume by left-clicking the mouse on the slider bar, labeled “Vol:,” on the dialog box toolbar of
Another feature of a preferred embodiment allows a user to pause and restart a graphical presentation. This allows a user to stop the movement of tracking objects on the display screen canvas, stop tones from sounding, etc. and to resume movement and sound at a desired time. To pause the presentation, the user presses the “Pause” button at the bottom of the dialog box, the “Pause” button changes to a “Play” button. To restart the presentation, the user simply presses the “Play” button.
A further feature of the present invention allows a user to remove the toolbar from the Muse 1.0 dialog window. To remove the toolbar, the user can either select the “Hide” button from the toolbar, press the “F3” function key on a PC keyboard, or select the “Tools” button on the toolbar and then select the “Show/Hide Toolbar (F3)” menu option. To make the toolbar visible, the user can press the “F3” function key again or select the “Tools” button on the toolbar and then select the “Show/Hide Toolbar (F3)” menu option.
Once the user finalizes a visual presentation of sound, the user can save the presentation in a musical artwork file for playback at a later time. In a preferred embodiment, as shown in
In
A preferred embodiment of the Muse 1.0 program provides an additional feature that allows a user to simulate playing a piano by playing music directly from a PC keyboard, or any other keyboard device etc. In this preferred embodiment, the keys from the top two rows of a PC keyboard are mapped to the keys of a piano as follows:
As an additional feature, holding the SHIFT key causes the PC keyboard keys numbered 7 and greater to increase by an octave, while causing the PC keyboard keys numbered 6 and less to drop an octave.
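The SHIFT-key octave rule above can be sketched as a small mapping function. The specific key-to-note assignments below are assumptions chosen for illustration (the table in the specification gives the actual Muse 1.0 mapping); only the SHIFT behavior follows the text:

```python
# Hypothetical sketch of a PC-keyboard-to-piano mapping with the SHIFT
# octave rule described above. The note assignments are assumptions;
# only the "7 and greater up, 6 and less down" rule comes from the text.

# MIDI note numbers for one octave starting at middle C (C4 = 60).
BASE_NOTES = {"1": 60, "2": 62, "3": 64, "4": 65, "5": 67,
              "6": 69, "7": 71, "8": 72, "9": 74}

def note_for_key(key, shift=False):
    """Return the MIDI note for a number-row key. With SHIFT held, keys
    numbered 7 and greater rise one octave; keys 6 and less drop one."""
    note = BASE_NOTES[key]
    if shift:
        if int(key) >= 7:
            note += 12    # up one octave (12 semitones)
        else:
            note -= 12    # down one octave
    return note
```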
Other user input devices such as a trackball, touch-screen, digitizing tablet, etc. can be used. In general, the computer system is illustrative of but one type of computer system, such as a desktop computer, suitable for use with the present invention. Computers can be configured with many different hardware components and can be made in many dimensions and styles (e.g. laptop, palmtop, pentop, server, workstation, mainframe). Any hardware platform suitable for performing the processing described herein is suitable for use with the present invention.
In
As with the external physical configuration shown in
In
In
Similarly, other computers at 84 are shown utilizing a local network at a different location from USER1 computer. The computers at 84 are coupled to the Internet via Server2. USER3 and Server3 represent yet a third installation.
Note that the concepts of “client” and “server,” as used in this application and the industry, are very loosely defined and, in fact, are not fixed with respect to machines or software processes executing on the machines. Typically, a server is a machine or process that is providing information to another machine or process, i.e., the “client,” that requests the information. In this respect, a computer process can be acting as a client at one point in time (because it is requesting information) and can be acting as a server at another point in time (because it is providing information). Some computers are consistently referred to as “servers” because they usually act as a repository for a large amount of information that is often requested. For example, a World Wide Web (WWW, or simply, “Web”) site is often hosted by a server computer with a large storage capacity, high-speed processor and Internet link having the ability to handle many high-bandwidth communication lines.
A server machine will most likely not be manually operated by a human user on a continual basis, but, instead, has software for constantly, and automatically, responding to information requests. On the other hand, some machines, such as desktop computers, are typically thought of as client machines because they are primarily used to obtain information from the Internet for a user operating the machine.
Depending on the specific software executing at any point in time on these machines, the machine may actually be performing the role of a client or server, as the need may be. For example, a user's desktop computer can provide information to another desktop computer. Or a server may directly communicate with another server computer. Sometimes this is characterized as “peer-to-peer” communication. Although processes of the present invention, and the hardware executing the processes, may be characterized by language common to a discussion of the Internet (e.g., “client,” “server,” “peer”), it should be apparent that software of the present invention can execute on any type of suitable hardware, including networks other than the Internet.
Although software of the present invention may be presented as a single entity, such software is readily able to be executed on multiple machines. That is, there may be multiple instances of a given software program, a single program may be executing on two or more processors in a distributed processing environment, parts of a single program may be executing on different physical machines, etc. Further, two different programs, such as a client and server program, can be executing in a single machine, or in different machines. A single program can be operating as a client for one information transaction and as a server for a different information transaction.
Table I, below, shows a list of source code files provided on a CD-ROM as a Source Code Appendix for this application. The files are on one CD-ROM. Two identical copies of the CD-ROM are provided. The files were recorded using an International Business Machines (IBM) compatible personal computer running the Microsoft™ Windows XP™ operating system and can be viewed with compatible equipment. All files are in ASCII format. File extensions include .inf, .tcl, .mus, .tkd.
Although the invention has been described with respect to particular embodiments thereof, these embodiments are merely illustrative and not restrictive of the invention. For example, although the invention has been presented in connection with specific database applications it should be apparent that any conceivable database application can benefit from features of the present invention.
A “term” or “search term” can include any condition, operator, symbol, name, phrase, keyword, meta-character (e.g., a “wild card” character), function call, utility, database language construct or other mechanism used to facilitate a search of data. It should be apparent that many traditional techniques used in database query and results presentation can be used to advantage with features of the present invention. Search terms need not be limited to a single text input but can include multiple lines of functional text or other information.
In some embodiments not all of the steps discussed herein need be used. Many such variations will be apparent to one of skill in the art.
Note that although specific means of user input and output are presented, any suitable input or output devices or approaches can be suitable for use with the present invention. For example, any number and type of text boxes, menus, selection buttons, or other controls can be used in any arrangement produced by any suitable display device. User input devices can include a keyboard, mouse, trackball, touchpad, data glove, etc. Display devices can include electronic displays, printed or other hardcopy or physical output, etc. Although the user interfaces of the present invention have been presented primarily as web pages, any other format, design or approach can be used. User input and output can also include other forms such as three-dimensional representations and/or audio. For example, voice recognition and voice synthesis can be used. In general, any input or output device can be employed.
Any suitable programming language can be used to implement the routines of the present invention including C, C++, Java, assembly language, etc. Different programming techniques can be employed such as procedural or object oriented. The routines can execute on a single processing device or multiple processors. The functions of the invention can be implemented in routines that operate in any operating system environment, as standalone processes, in firmware, dedicated circuitry or as a combination of these or any other types of processing.
Steps can be performed in hardware or software, as desired. Note that steps can be added to, taken from or modified from the steps presented in this specification or Figures without deviating from the scope of the invention. In general, descriptions of functional steps, such as in tables or flowcharts are only used to indicate one possible sequence of basic operations to achieve a functional aspect of the present invention. Functioning embodiments of the invention may be realized with more or less processing than is described herein.
In the description herein, numerous specific details are provided, such as examples of components and/or methods, to provide a thorough understanding of embodiments of the present invention. One skilled in the relevant art will recognize, however, that an embodiment of the invention can be practiced without one or more of the specific details, or with other apparatus, systems, assemblies, methods, components, materials, parts, and/or the like. In other instances, well-known structures, materials, or operations are not specifically shown or described in detail to avoid obscuring aspects of embodiments of the present invention.
A “computer” for purposes of embodiments of the present invention may be any processor-containing device, such as a mainframe computer, a personal computer, a laptop, a notebook, a microcomputer, a server, personal digital assistant (PDA), cell phone or other hand-held processor, or the like. A “computer program” may be any suitable program or sequence of coded instructions that are to be inserted into a computer, well known to those skilled in the art. Stated more specifically, a computer program is an organized list of instructions that, when executed, causes the computer to behave in a predetermined manner. A computer program contains a list of ingredients (called variables) and a list of directions (called statements) that tell the computer what to do with the variables. The variables may represent numeric data, text, or graphical images.
A “computer-readable medium” or “machine-readable medium” for purposes of embodiments of the present invention may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, system or device. The computer readable medium can be, by way of example only but not by limitation, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, system, device, propagation medium, or computer memory.
A “processor” or “process” includes any human, hardware and/or software system, mechanism or component that processes data, signals or other information. A processor can include a system with a general-purpose central processing unit, multiple processing units, dedicated circuitry for achieving functionality, or other systems. Processing need not be limited to a geographic location, or have temporal limitations. For example, a processor can perform its functions in “real time,” “offline,” in a “batch mode,” etc. Portions of processing can be performed at different times and at different locations, by different (or the same) processing systems.
A “server” may be any suitable server (e.g., database server, disk server, file server, network server, terminal server, etc.), including a device or computer system that is dedicated to providing specific facilities to other devices attached to a network. A “server” may also be any processor-containing device or apparatus, such as a device or apparatus containing CPUs. Although the invention is described with respect to a client-server network organization, any network topology or interconnection scheme can be used. For example, peer-to-peer communications can be used.
Reference throughout this specification to “one embodiment”, “an embodiment”, or “a specific embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention and not necessarily in all embodiments. Thus, respective appearances of the phrases “in one embodiment”, “in an embodiment”, or “in a specific embodiment” in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, the particular features, structures, or characteristics of any specific embodiment of the present invention may be combined in any suitable manner with one or more other embodiments. It is to be understood that other variations and modifications of the embodiments of the present invention described and illustrated herein are possible in light of the teachings herein and are to be considered as part of the spirit and scope of the present invention.
Further, at least some of the components of an embodiment of the invention may be implemented by using a programmed general purpose digital computer, by using application specific integrated circuits, programmable logic devices, or field programmable gate arrays, or by using a network of interconnected components and circuits. Any communication channel or connection can be used such as wired, wireless, optical, etc.
It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application. It is also within the spirit and scope of the present invention to implement a program or code that can be stored in a machine-readable medium to permit a computer to perform any of the methods described above.
Additionally, any signal arrows in the drawings/Figures should be considered only as exemplary, and not limiting, unless otherwise specifically noted. Furthermore, the term “or” as used herein is generally intended to mean “and/or” unless otherwise indicated. Combinations of components or steps will also be considered as being noted, where terminology is foreseen as rendering the ability to separate or combine unclear.
As used in the description herein and throughout the claims that follow, “a”, “an” and “the” include plural references unless the context clearly dictates otherwise. Also, as used in the description herein and throughout the claims that follow, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.
The foregoing description of illustrated embodiments of the present invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms discussed herein. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes only, various equivalent modifications are possible within the spirit and scope of the present invention, as those skilled in the relevant art will recognize and appreciate. As indicated, these modifications may be made to the present invention in light of the foregoing description of illustrated embodiments of the present invention and are to be included within the spirit and scope of the present invention.
Thus, while the present invention has been described herein with reference to particular embodiments thereof, a latitude of modification, various changes and substitutions are intended in the foregoing disclosures, and it will be appreciated that in some instances some features of embodiments of the invention will be employed without a corresponding use of other features without departing from the scope and spirit of the invention as set forth. Therefore, many modifications may be made to adapt a particular situation or material to the essential scope and spirit of the present invention. It is intended that the invention not be limited to the particular terms used in following claims and/or to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include any and all embodiments and equivalents falling within the scope of the appended claims.
The scope of the invention is to be determined solely by the appended claims.