1. Field of the Invention
This invention relates to performance information edit and playback apparatuses that edit performance information to play back automatic performance and automatic accompaniment on computer music systems such as electronic musical instruments.
2. Description of the Related Art
Conventionally, engineers design and propose computer music systems such as electronic musical instruments that reproduce performance information, called style data, containing tone pitch data and timing data to play back automatic accompaniment. The style data contain multiple parts for percussion instruments, accompaniment, etc. In addition, users of the electronic musical instruments can create or edit user's performance data containing multiple parts, by which musical performance is played back.
As for edit and playback of the performance information, the conventional apparatuses provide various functions, which are described below.
That is, user's performance data are created using style data containing multiple parts. Herein, a copy function is provided to copy the style data as the user's performance data on a storage, wherein the style data represent styles each having one or more parts. Conventionally, the style data are written to a storage area of the user's performance data in units of styles; namely, all parts of a style are collectively written to the storage area.
In a playback mode, there is provided a function that enables simultaneous reproduction of the user's performance data and style data.
In a record mode, there is provided a function in which the user designates a certain part of the user's performance data by operating a record switch and a start switch so that performance data are to be recorded on the storage or media with respect to the designated part.
However, the conventional apparatuses suffer from various problems with regard to the aforementioned functions. As for the copy function in which the style data are copied (or written) into the user's performance data, for example, all parts of a style are collectively written to the storage area of the user's performance data. This raises an inconvenience in which the user is unable to create ‘frequently-used’ performance data by copying only a selected part (or selected parts) of the style.
In addition, the conventional apparatuses are restricted in functions such that the user's performance data and style data are simultaneously reproduced. This raises a problem in which the user is not always able to play back musical performance in a desired manner. In some cases, both the style data and the user's performance data contain parts that are assigned to a same tone-generation channel of a sound source in a duplicate manner. In those cases, the apparatus plays back a musical tune in which the duplicately assigned parts are merged on the same tone-generation channel. Therefore, the user may feel inconvenience due to unintentional merging of parts in the musical tune being played back.
In the case of the record mode that enables recording upon user's operations of the record switch and start switch, the conventional apparatus does not provide a distinction in display between a recording part, which is set to a record mode, and a non-recording part, which is not set to the record mode. Consequently, the user is unable to visually grasp whether the recording is actually performed on the performance data or not. This raises an inconvenience for the user due to inadequate display of the recording status.
It is an object of the invention to provide a performance information edit and playback apparatus that is improved in functions to provide conveniences for the user who edits, records and plays back performance information containing user's performance data and style data.
A performance information edit and playback apparatus is actualized by loading programs into a computer having a display and a storage that stores user's performance data containing multiple parts and a number of style data, each of which contains multiple constituent parts. On the screen of the display, there are provided a performance data window showing contents of the multiple parts of the user's performance data and a style data window showing content of desired style data that is selected by the user. Thus, the user is able to copy a constituent part of the desired style data in the style data window to a specific part within the multiple parts of the user's performance data in the performance data window. Herein, tone pitches of the copied constituent part of the desired style data are automatically modified to suit chord information that is previously allocated to a chord sequence in the performance data window. In addition, a length of the copied constituent part of the desired style data is automatically adjusted to match the specific part of the user's performance data in units of measures. The recording on the specific part of the user's performance data is started upon user's operations of a record switch and a start switch on the screen of the display.
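The automatic pitch modification described above can be illustrated with a minimal sketch. The function below is hypothetical: it assumes the adjustment amounts to transposing the pasted notes toward the chord root allocated in the chord sequence, whereas the embodiment's actual algorithm (which also handles chord types and bass notes) is not specified at this level of detail.

```python
# Hypothetical sketch: transpose copied style notes to suit the target
# chord root. Only the root shift is modeled; a real implementation would
# also remap chord tones for the chord type (major, minor, seventh, ...).

NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def fit_to_chord(notes, source_root, target_root):
    """Transpose MIDI note numbers from source_root to target_root."""
    shift = (NOTE_OFFSETS[target_root] - NOTE_OFFSETS[source_root]) % 12
    if shift > 6:
        shift -= 12          # prefer the smaller interval, up or down
    return [n + shift for n in notes]
```

For example, a C-major figure pasted onto a section whose chord sequence designates G would be shifted down a fourth rather than up a fifth, keeping the copied part in a similar register.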
In addition, the user is able to alternatively select one of the specific part of the user's performance data and the constituent part of the desired style data, both of which are allocated to a same tone-generation channel. Thus, it is possible to avoid occurrence of merging between the aforementioned parts in playback of a musical tune.
Further, the apparatus performs discrimination as to whether the specific part corresponds to a recording part, which is set to a record mode, or a non-recording part which is not set to the record mode. In response to the discrimination result, the start switch is changed in a display manner (e.g., color) on the screen.
These and other objects, aspects and embodiments of the present invention will be described in more detail with reference to the following drawing figures, of which:
This invention will be described in further detail by way of examples with reference to the accompanying drawings.
The detection circuit 6 operates as an input interface for inputting operation events of a mouse and a keyboard 11. The display circuit 7 is actualized by a video card or video chip, which performs display control on the display 12. The communication interface 8 provides connections with a local area network (LAN) or the Internet, or it is connected with a communication network 13 via telephone lines, for example. That is, the personal computer can perform communications with server computers (not shown) by means of the communication interface 8. The MIDI interface 9 is connected with a sound source device (or MIDI device) 14, by which the personal computer can perform communications based on the MIDI standard. In a playback mode of user's performance data and style data, the personal computer provides MIDI data that are supplied to the sound source device 14 via the MIDI interface 9. Based on the MIDI data, the sound source device 14 activates the sound system 15 to generate musical tones. The timer 4 generates interrupt signals that are used to perform interrupt processes for playback and recording. In addition, the timer 4 generates various types of clock signals that are used to perform interrupt processes for detecting operation events of the keyboard.
The CPU 1 uses a working area of the RAM 3 to perform normal controls in accordance with the operating system (OS) that is installed on hard disks of a hard disk drive (HDD), which corresponds to the external storage device 5. As the normal controls, the CPU 1 controls the display 12, and it inputs data in response to user's operations of the mouse and keyboard 11. In addition, the CPU 1 controls the position of a mouse pointer (or mouse cursor) on the screen of the display 12, and it detects user's click operations of the mouse. Thus, user's input and setting operations are implemented using the mouse 11 and the display 12 by the so-called graphical user interface (GUI).
As the external storage device 5, it is possible to use a floppy disk drive (FDD), a hard-disk drive (HDD), a magneto-optical disk (MO) drive, a CD-ROM drive, a digital versatile disk (DVD) drive, or the like. The external storage device 5 provides the personal computer with performance information edit and playback programs, details of which will be described later. In addition, the external storage device 5 is used to save user's performance data that the user creates according to needs. Further, the external storage device 5 can be used as databases for tune template data and style data, which are basic information for the user to create the user's performance data.
Connecting the communication interface 8 with the communication network 13, the personal computer can download, from the server computer, performance information edit and playback programs as well as various types of data such as tune template data and style data. The present embodiment is designed such that the hard disks of the hard-disk drive (HDD) corresponding to the external storage device 5 are used to store the performance information edit and playback programs, tune template data and style data. So, the CPU 1 expands the performance information edit and playback programs stored on the hard disks into the RAM 3, according to which it executes performance information edit and playback processes.
Each of the three parts 1, 2, 3 contains initial information such as the tone color and tempo, which is followed by pairs of timings and musical tone events. Following the initial information, each part sequentially describes the timings and musical tone events, then, it finally describes end data. The style sequence is used to sequentially read styles in accordance with progression of musical performance, wherein it sequentially describes pairs of timings and designation events, then, it finally describes end data. The chord sequence is used to designate chords in accordance with progression of the musical performance, wherein it sequentially describes pairs of timings and chord events, then, it finally describes end data. Herein, the chord events represent chord types, roots and bass notes, for example. The aforementioned parts 1, 2, 3 and the style sequence and chord sequence are recorded on different tracks respectively, so they are sometimes referred to as tracks in the following description.
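The track layout described above can be sketched as a set of plain data structures. The following Python sketch is illustrative only; the class and field names are hypothetical and not taken from the embodiment.

```python
from dataclasses import dataclass, field

# Hypothetical data structures mirroring the described layout: each part
# carries initial information followed by (timing, event) pairs; the style
# and chord sequences carry their own (timing, event) pairs per track.

@dataclass
class Part:
    tone_color: str                       # initial information
    tempo: int
    events: list = field(default_factory=list)   # (timing, tone event) pairs

@dataclass
class ChordEvent:
    timing: int
    chord_type: str                       # e.g. "maj", "min7"
    root: str                             # e.g. "C"
    bass: str                             # bass note

@dataclass
class UserPerformanceData:
    parts: list                           # parts 1-3, one Part per track
    style_sequence: list                  # (timing, style designation) pairs
    chord_sequence: list                  # ChordEvent entries in time order

song = UserPerformanceData(
    parts=[Part("piano", 120), Part("guitar", 120), Part("drums", 120)],
    style_sequence=[(0, "style A"), (1920, "style B")],
    chord_sequence=[ChordEvent(0, "maj", "C", "C"),
                    ChordEvent(960, "min", "A", "A")],
)
```

In an actual file format each list would terminate with end data, as the description notes; the sketch omits that marker for brevity.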
The aforementioned user's performance data assign the accompaniment part and percussion instrument part to the parts 2 and 3 respectively, wherein the style sequence is also used to read the style data that allow playback of the accompaniment part and percussion instrument part. Without using the parts 2 and 3, the user's performance data allow generation of accompaniment sounds (and percussion instrument sounds) in addition to the melody of the musical tune by using the melody part (part 1) together with the style sequence and chord sequence. Without using the style sequence, the user's performance data allow generation of accompaniment sounds in addition to the melody of the musical tune by merely using the parts 1, 2 and 3. In this case, it is necessary to create data for the accompaniment part and percussion instrument part as the parts 2 and 3 respectively. Herein, the present embodiment allows copying of a desired part of the style data to the user's performance data. Thus, the present embodiment can assist the user in creating or editing a variety of performance data.
Incidentally, the parts 2 and 3 of the user's performance data respectively correspond to the parts 2 and 3 of the style data, wherein each of the parts 2 and 3 is assigned to the same tone-generation channel in a duplicate manner. For this reason, if both of the user's performance data and style data are simultaneously reproduced, a musical tune is played back with merging of the corresponding parts that are assigned to the same tone-generation channel in the duplicate manner.
In the performance data window W1, a PART-NAME area contains three sections that describe names of parts 1, 2 and 3 respectively. Correspondingly, a REC-PART area contains three sections in connection with the parts 1, 2 and 3, wherein each section indicates a record mode setting status with respect to each part by using a small circle mark.
The system of the present embodiment proceeds to recording when the user clicks the record switch SW1 and the start switch SW2 with the mouse. After that, input data are written to the part(s) under the record mode. Before the recording, the start switch SW2 is displayed on the screen as shown by (A) of
A PERFORMANCE-DATA area contains three elongated-rectangular areas in connection with the parts 1–3 respectively, wherein each area shows contents of performance data with respect to each track. Herein, a horizontal direction directed from left to right on the screen indicates a lapse of time, along which each area is partitioned into sections using delimiter lines L corresponding to bar lines, for example. Elongated circular bars called “blocks” are displayed on the section(s) of the areas, wherein each block indicates content of performance data with respect to the corresponding part. By double-clicking, it is possible to select each of the blocks displayed on the sections corresponding to measures in the PERFORMANCE-DATA area, so that detailed content of the selected block is displayed on the screen. This allows the user to edit the content of the performance data corresponding to the selected block in the PERFORMANCE-DATA area on the screen. The performance data window W1 also contains two areas for the style sequence and chord sequence. Herein, the style sequence area is divided into three sections corresponding to measures, on which elongated circular bars (or blocks) are displayed to show names of styles. The chord sequence area is likewise divided into three sections corresponding to measures, on which blocks are displayed to show names of chords.
In the track of the part 1 shown in
Incidentally, the performance data window W1 merely shows general names regarding performance data, styles and chords such as ‘user record’, ‘style A’, ‘style B’, ‘style C’, ‘chord A’, ‘chord B’, ‘chord C’ and ‘chord D’. Actually, the window W1 shows their concrete names, which are designated by the user or the like. Particularly, the names ‘chord A’ to ‘chord D’ do not designate names of roots of chords.
The input box B named ‘STYLE-SELECT’ is an area of a list menu form that allows the user to select a desired style. When the user clicks a down button, the input box B shows a list box containing names of styles. When the user clicks a desired style from among the styles of the list box, the input box B shows the desired style selected by the user.
The user is able to paste a desired block (namely, desired part of the selected style) of the style data window W2 onto a desired position within the aforementioned sections of the PERFORMANCE-DATA area of the performance data window W1 by drag and drop operations with the mouse. That is, the user clicks the desired block of the style data window W2, then drags and drops it to the desired position, namely the desired section within the PERFORMANCE-DATA area of the performance data window W1.
Incidentally, a length of the block of ‘part 2 of style A’ is shorter than a length of one section in the PERFORMANCE-DATA area. When the block is copied to the section once, the system of the present embodiment automatically repeats the block so that the part of the style A is extended to match the length of the section. In the track of the part 3 of the PERFORMANCE-DATA area, the user copies a block of ‘part 3 of style C’ (which is similar to the block of ‘part 3 of style A’) to each of the three sections respectively, so that the block extends entirely over the three sections on the screen.
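The automatic extension of a short block can be sketched as a tiling operation. The function below is a hypothetical illustration, assuming timings are expressed in ticks and a section length is a whole number of measures; the embodiment does not prescribe this exact signature.

```python
def tile_block(events, block_len, section_len):
    """Repeat a copied style block so it fills a target section.

    events:      (timing, event) pairs relative to the block start
    block_len:   length of the block in ticks
    section_len: length of the target section in ticks
    """
    tiled = []
    offset = 0
    while offset < section_len:
        for timing, ev in events:
            t = offset + timing
            if t >= section_len:      # truncate a partial final repetition
                break
            tiled.append((t, ev))
        offset += block_len
    return tiled
```

For instance, a one-beat drum block copied into a four-beat section would be laid down four times, which matches the described behavior of extending the pasted part to the section length.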
As described above, the present embodiment allows the user to write performance data of a desired part of the style data into the user's performance data in the PERFORMANCE-DATA area on the screen.
By clicking the stop switch SW3, the user can stop playback and recording on the screen.
By clicking the mode select switch SW4, the user can change over performance data, which are selectively read for the accompaniment part. Namely, the mode select switch SW4 is used to change over between the style data and user's performance data with regard to selection for the parts 2 and 3. In the user mode, the system of the present embodiment selects the parts 2 and 3 of the user's performance data. In the style mode, the system selects the parts 2 and 3 of the style data. The mode select switch SW4 is changed in a display manner (e.g., color) in response to the modes respectively. In
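The changeover performed by the mode select switch SW4 can be sketched as a simple selection between the two data sources for parts 2 and 3. The function below is a hypothetical illustration of that selection; because only one source is chosen per part, the same tone-generation channel is never driven by both the user's performance data and the style data at once.

```python
def select_accompaniment(mode, user_parts, style_parts):
    """Choose which data feed parts 2 and 3, mirroring switch SW4.

    mode:        "user" selects the user's performance data,
                 "style" selects the style data.
    user_parts:  dict mapping part number -> user performance data
    style_parts: dict mapping part number -> style data
    """
    if mode == "user":
        return {2: user_parts[2], 3: user_parts[3]}
    if mode == "style":
        return {2: style_parts[2], 3: style_parts[3]}
    raise ValueError(f"unknown mode: {mode!r}")
```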
Next, detailed operations of the performance information edit and playback programs that are executed by the CPU 1 will be described with reference to flowcharts of
In step S4, a decision is made as to whether a click event occurs on the start switch SW2 or not. If the CPU 1 does not detect the click event, the flow proceeds to step S6. If the CPU 1 detects the click event on the start switch SW2, the flow proceeds to step S5 in which the start switch SW2 is adequately changed in the display manner as shown by (A)–(C) of
In step S6, the CPU 1 performs an edit process, details of which are shown in
Due to the aforementioned step S5, it is possible to change the start switch SW2 in the display manner (see
Next, a description will be given with respect to an edit process with reference to
In step S16, performance data of the moved block (containing tone pitches modified by the foregoing step S15) are recorded on the specific part of the user's performance data (see
In step S17, the CPU 1 performs other processes, then, it reverts control to the main routine shown in
By the foregoing steps S14, S15 and S16 shown in
In step S24, a decision is made as to whether the REC flag is set to ‘1’ or not. If the REC flag is set to ‘0’ designating a non-recording mode, the flow reverts control to the original routine. If the REC flag is set to ‘1’ designating a recording mode in progress, the flow proceeds to step S25 in which information of the input buffer is recorded on the specific part, which is under the record condition, as events of performance data together with their timing data. Then, the flow reverts control to the original routine. As the input buffer, it is possible to use a buffer that successively stores information of musical performance that the user plays on the electronic musical instrument (not shown) connected with the MIDI interface 9, for example. Herein, the input buffer records information of user's performance at every interrupt timing thereof. The temporarily stored content of the input buffer is cleared every time the CPU 1 performs a recording process of the specific part in the foregoing step S25. Thus, it is possible to create the user record part, namely a block of performance data that is arranged in the second and third sections of the track of the part 1 or the third section of the track of the part 2 shown in
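The interplay of the REC flag, the input buffer and step S25 can be sketched as follows. The class below is a hypothetical illustration: on each timer interrupt it flushes the buffered performance events into the recording part only while the REC flag is set to ‘1’, and clears the buffer after every recording pass, as described above.

```python
class Recorder:
    """Sketch of the step S24/S25 behavior on each timer interrupt."""

    def __init__(self):
        self.rec_flag = 0          # 0: non-recording mode, 1: recording
        self.input_buffer = []     # (timing, event) pairs from the MIDI interface
        self.recorded_part = []    # the specific part under the record condition

    def on_interrupt(self):
        # Step S24: if the REC flag is '0', revert without recording.
        if self.rec_flag != 1:
            return
        # Step S25: record buffered events with their timing data,
        # then clear the temporarily stored content of the buffer.
        self.recorded_part.extend(self.input_buffer)
        self.input_buffer.clear()
```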
Because of the alternative execution of the steps S22 and S23, the system of the present embodiment does not simultaneously reproduce the part that is duplicated between the user's performance data and style data. Therefore, it is possible to play back a musical tune precisely in response to user's instructions or commands.
It is possible to modify the present embodiment within the scope of the invention in a variety of manners, which will be described below.
Detailed contents of the parts of the user's performance data are not necessarily limited to ones as described in the present embodiment. However, it is preferable that prescribed part numbers are allocated to part types (namely, types of musical instruments) in advance as described in the present embodiment.
In addition, a number of parts included in the user's performance data is not necessarily limited to the aforementioned number (i.e., three) of the present embodiment, hence, it is possible to arbitrarily set a desired number of parts included in the user's performance data. Herein, it is required to establish correspondence between the parts included in the user's performance data and parts included in the style data. The present embodiment sets the same part numbers to represent correspondence between the prescribed parts of the user's performance data and the parts of the style data. Alternatively, it is possible to set the same part between the user's performance data and style data with respect to the same tone color.
The present embodiment describes such that multiple types of style data are stored with respect to each genre of music. It is possible to store multiple types of style data with respect to each variation (e.g., intro, fill-in, main, ending, etc.) and each genre of music.
The present embodiment describes such that the style data consists of data of multiple parts. It is possible to configure the style data to include an optimal chord sequence in addition to the data of multiple parts. In that case, when a style block designating a specific part contained in the style data is pasted onto a desired section of the user's performance data, it is modified in tone pitch based on the chord sequence of the style data.
The present embodiment can be modified to allow writing of the style data into the user's performance data restrictively with respect to the same part therebetween.
The present embodiment can be modified to allow the user to set the record mode on the style sequence and chord sequence as well.
The present embodiment uses the prescribed data format of musical tone events for describing details of parts of the style data, which are recorded on designated parts of the user's performance data. Instead, it is possible to use a simple format of data that merely designate the specific part of the style data.
The overall system of the present embodiment is configured using a personal computer that runs software programs regarding performance information edit and playback processes. Of course, this invention is applicable to electronic musical instruments simulating various types of musical instruments such as keyboard instruments, stringed instruments, wind instruments, percussion instruments, etc. In addition, this invention can be applied to automatic performance apparatuses such as player pianos. Further, this invention can be applied to various types of music systems, which are actualized by linking together sound source devices, sequencers and effectors by communication tools, MIDI interfaces, networks and the like.
As the format for describing the user's performance data, style data, style sequence and chord sequence, it is possible to use any one of the prescribed formats, examples of which are described below.
The present embodiment describes such that the performance information edit and playback programs are stored in the hard disks of the external storage device 5. If functions of the personal computer PC shown in
The present embodiment shown in
This invention has a variety of effects and technical features, which are described below.
As this invention may be embodied in several forms without departing from the spirit of essential characteristics thereof, the present embodiment is therefore illustrative and not restrictive, since the scope of the invention is defined by the appended claims rather than by the description preceding them, and all changes that fall within metes and bounds of the claims, or equivalence of such metes and bounds are therefore intended to be embraced by the claims.
Number | Date | Country | Kind |
---|---|---|---|
2000-115010 | Apr 2000 | JP | national |
Number | Name | Date | Kind |
---|---|---|---|
5326930 | Hayakawa | Jul 1994 | A |
5495072 | Kumagi | Feb 1996 | A |
5627335 | Rigopulos et al. | May 1997 | A |
5663517 | Oppenheim | Sep 1997 | A |
5723803 | Kurakake | Mar 1998 | A |
5739454 | Kunimoto | Apr 1998 | A |
5754851 | Wissner | May 1998 | A |
5801694 | Gershen | Sep 1998 | A |
5864079 | Matsuda | Jan 1999 | A |
5908997 | Arnold et al. | Jun 1999 | A |
6051770 | Milburn et al. | Apr 2000 | A |
6353170 | Eyzaguirre et al. | Mar 2002 | B1 |
6362411 | Suzuki et al. | Mar 2002 | B1 |
6424944 | Hikawa | Jul 2002 | B1 |
20040094017 | Suzuki et al. | May 2004 | A1 |
Number | Date | Country |
---|---|---|
H07-244478 | Sep 1995 | JP |
10133658 | May 1998 | JP |
H07-121179 | May 1999 | JP |
11-126068 | Nov 1999 | JP |
Number | Date | Country | |
---|---|---|---|
20010030659 A1 | Oct 2001 | US |