The present invention relates to a sound source system that combines music tone waveform generating modules implemented by software, and that generates music tone waveform data based on music tone waveform generating computation performed by each music tone waveform generating module. In addition, the present invention relates to a sound source waveform generating method that uses a general-purpose computation processing machine for executing a waveform computation algorithm so as to generate tone waveform data.
Conventionally, in order to generate a music tone according to a variety of music tone generating methods such as a waveform memory tone generating method and an FM tone generating method, a circuit for implementing the music tone generating method is constituted by dedicated hardware such as an LSI specifically designed for a sound source and a digital signal processor (DSP) that operates under the control of a fixed microprogram. The music tone generator constituted by the dedicated hardware is generically referred to as a hardware sound source hereafter. However, the hardware sound source requires dedicated hardware components, hence reduction of the product cost is difficult. It is also difficult for the hardware sound source to flexibly modify its specifications once the design has been completed.
Recently, as the computational performance of CPUs has been improving, tone generators have been developed in which a general-purpose computer, or a CPU installed in a dedicated tone generator, executes software programs describing predetermined tone generation processing procedures to generate music tone waveform data. The tone generator based on such software programs is generically referred to as a software sound source hereafter.
Use of the hardware sound source in a computer system or a computer-based system presents problems of increasing the cost and decreasing the flexibility of modification. Meanwhile, the conventional software sound sources simply replace the capabilities of the dedicated hardware devices such as the conventional tone generating LSI. The software sound source is more flexible in modification of the specifications after completion of design than the hardware sound source. However, the conventional software sound source cannot satisfy a variety of practical demands occurring during vocalization or during operation of the sound source. These demands come from CPU performance, system environment, user preferences and user settings. To be more specific, the conventional software sound sources cannot satisfy the demands for changing fidelity of an outputted music tone waveform (not only the change to higher fidelity but also to lower fidelity) and demands for changing the degree of timbre variation (for example, change from normal timbre variation to subtle timbre variation or vice versa).
Recently, an attempt has been made to generate tone waveform data by operating a general-purpose processor such as a personal computer to run software programs and to convert the generated digital tone waveform data through a CODEC (coder-decoder) into an analog music tone signal for vocalization. The sound source that generates the tone waveform data in such a manner is referred to as the software sound source as mentioned before. Otherwise, the tone waveform data may be generated by an LSI dedicated to tone generation or by a device dedicated to tone generation having a digital signal processor (DSP) executing a microprogram. The sound source based on this scheme is referred to as the hardware sound source as mentioned before.
Generally, a personal computer runs a plurality of application software programs in parallel. Sometimes, a karaoke application program or a game application program is executed concurrently with a software sound source application program. This situation, however, increases the work load imposed on the CPU (Central Processing Unit) in the personal computer. Such an overload delays the generation of tone waveform data by the software sound source, thereby interrupting the vocalization of a music tone in the worst case. When the CPU is operating in the multitask mode, the above-mentioned concurrent processing may place the tasks other than the tone generation task into a wait state.
In the hardware sound source, a waveform computation algorithm is executed by the DSP or the like to generate tone waveform data. The computational performance of the DSP has been improving year by year, but the conventional tone waveform data generating method cannot make the most of the enhanced performance of the DSP.
It is therefore an object of the present invention to provide a sound source system based on computer software capable of reducing cost by generating a music tone by a software program without adding special dedicated hardware and, at the same time, capable of changing the load of a computation unit for computing music tone waveform and improving the quality of an output music tone.
It is another object of the present invention to provide a tone waveform data generating method that is capable of generating tone waveform data without interrupting the vocalization of a music tone even if the CPU load is raised high, and capable of, when the CPU is operating in the multitask mode, processing tasks not associated with the tone waveform generation without placing these tasks in a wait state.
It is still another object of the present invention to provide a tone waveform data generating method that enables a hardware sound source to fully exert its computational capability so as to provide a waveform output having higher precision than before.
The inventive sound source apparatus has operation blocks composed of software used to compute waveforms for generating a plurality of musical tones through a plurality of channels according to performance information. In the inventive apparatus, a setting device sets an algorithm which determines a system composed of selective ones of the operation blocks systematically combined with each other to compute a waveform specific to one of the musical tones. A designating device responds to the performance information for designating one of the channels to be used for generating said one musical tone. A generating device allocates the selective operation blocks to said one channel and systematically executes the allocated selective operation blocks according to the algorithm so as to compute the waveform to thereby generate said one musical tone through said one channel.
Preferably, the setting device sets different algorithms which determine different systems corresponding to different timbres of the musical tones. Each of the different systems is composed of selective ones of the operation blocks which are selectively and sequentially combined with each other to compute a waveform which is specific to a corresponding one of the different timbres.
Preferably, the setting device comprises a determining device that determines a first system combining a great number of operation blocks and corresponding to a regular timbre and that determines a second system combining a small number of operation blocks and corresponding to a substitute timbre, and a changing device operative when a number of operation blocks executable in the channel is limited under said great number and over said small number due to a load of the computation of the waveform for changing the musical tone from the regular timbre to the substitute timbre so that the second system is adopted for the channel in place of the first system.
Preferably, the setting device comprises an adjusting device operative dependently on a condition during the course of generating the musical tone for adjusting a number of the operation blocks to be allocated to the channel.
Preferably, the adjusting device comprises a modifying device that modifies the algorithm to eliminate a predetermined one of the operation blocks involved in the system so as to reduce a number of the operation blocks to be loaded into the channel for adjustment to the condition.
Preferably, the adjusting device operates when the condition indicates that an amplitude envelope of the waveform attenuates below a predetermined threshold level for compacting the system so as to reduce the number of the operation blocks.
Preferably, the adjusting device operates when the condition indicates that an output volume of the musical tone is tuned below a predetermined threshold level for compacting the system so as to reduce the number of the operation blocks.
Preferably, the adjusting device operates when the condition indicates that one of the operation blocks declines to become inactive in the system without substantially affecting other operation blocks of the system for eliminating said one operation block so as to reduce the number of the operation blocks to be allocated to the channel.
Preferably, the generating device comprises a computing device responsive to a variable sampling frequency for executing the operation blocks to successively compute samples of the waveform in synchronization to the variable sampling frequency so as to generate the musical tone, and a controlling device that sets the variable sampling frequency according to process of computation of the waveform by the operation blocks.
Preferably, the generating device comprises a computing device responsive to a variable sampling frequency for executing the operation blocks to successively compute samples of the waveform in synchronization to the variable sampling frequency so as to generate the musical tone, and a controlling device for adjusting the variable sampling frequency dependently on a load of computation of the waveform during the course of generating the musical tone.
Preferably, the generating device comprises a computing device responsive to a variable sampling frequency for executing the operation blocks to successively compute samples of the waveform in synchronization to the variable sampling frequency so as to generate the musical tone, and a controlling device for adjusting the variable sampling frequency according to result of computation of the samples during the course of generating the musical tone.
The inventive sound source apparatus has a software module used to compute samples of a waveform in response to a sampling frequency for generating a musical tone according to performance information. In the inventive apparatus, a processor device periodically executes the software module for successively computing samples of the waveform corresponding to a variable sampling frequency so as to generate the musical tone. A detector device detects a load of computation imposed on the processor device during the course of generating the musical tone. A controller device operates according to the detected load for changing the variable sampling frequency to adjust a rate of computation of the samples.
Preferably, the controller device provides a fast sampling frequency when the detected load is relatively light, and provides a slow sampling frequency when the detected load is relatively heavy such that the rate of the computation of the samples is reduced to 1/n, where n denotes an integer.
Preferably, the processor device includes a delay device having a memory for imparting a delay to the waveform to determine a pitch of the musical tone according to the performance information. The delay device generates a write pointer for successively writing the samples into addresses of the memory and a read pointer for successively reading the samples from addresses of the memory to thereby create the delay corresponding to an address gap between the write pointer and the read pointer. The delay device is responsive to the fast sampling frequency to increment both of the write pointer and the read pointer by one address for one sample. Otherwise, the delay device is responsive to the slow sampling frequency to increment the write pointer by one address n times for one sample and to increment the read pointer by n addresses for one sample.
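The dual-rate pointer control described above may be sketched as follows in Python; the class name, buffer size, and delay values here are illustrative assumptions, not the actual implementation of the delay device.

```python
class DelayLine:
    """Toy model of the dual-rate delay device (names and sizes are illustrative)."""

    def __init__(self, size, delay):
        self.mem = [0.0] * size     # delay memory
        self.size = size
        self.delay = delay          # address gap between write and read pointers
        self.write_ptr = 0          # the read pointer is write_ptr - delay

    def tick_fast(self, sample):
        # Fast sampling frequency: both pointers advance one address per sample.
        self.mem[self.write_ptr % self.size] = sample
        out = self.mem[(self.write_ptr - self.delay) % self.size]
        self.write_ptr += 1
        return out

    def tick_slow(self, sample, n):
        # Slow (1/n) sampling frequency: the write pointer is incremented by one
        # address n times for one computed sample, so the same sample fills n
        # consecutive addresses, and the read pointer advances n addresses per
        # sample; the delay time measured in real time is thereby preserved.
        for _ in range(n):
            self.mem[self.write_ptr % self.size] = sample
            self.write_ptr += 1
        return self.mem[(self.write_ptr - self.delay) % self.size]
```

Because the address gap is kept constant in both modes, the pitch determined by the delay does not shift when the controller changes the sampling frequency.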
Preferably, the processor device includes a delay device having a pair of memory regions for imparting a delay to the waveform to determine a pitch of the musical tone according to the performance information. The delay device successively writes the samples of the waveform of one musical tone into addresses of one of the memory regions, and successively reads the samples from addresses of the same memory region to thereby create the delay. The delay device is operative when said one musical tone is switched to another musical tone for successively writing the samples of the waveform of said another musical tone into addresses of the other memory region and successively reading the samples from addresses of the same memory region to thereby create the delay while clearing the one memory region to prepare for a further musical tone.
Preferably, the processor device executes the software module composed of a plurality of sub-modules for successively computing the waveform. The processor device is operative when one of the sub-modules declines to become inactive without substantially affecting other sub-modules during computation of the waveform for skipping execution of said one sub-module.
The inventive sound source apparatus has a software module used to compute samples of a waveform for generating a musical tone. In the inventive apparatus, a provider device variably provides a trigger signal at a relatively slow rate to define a frame period between successive trigger signals, and periodically provides a sampling signal at a relatively fast rate such that a plurality of sampling signals occur within one frame period. A processor device is resettable in response to each trigger signal and is operable based on each sampling signal to periodically execute the software module for successively computing a number of samples of the waveform within one frame. A detector device detects a load of computation imposed on the processor device during the course of generating the musical tone. A controller device is operative according to the detected load for varying the frame period to adjust the number of the samples computed within one frame period. A converter device is responsive to each sampling signal for converting each of the samples into a corresponding analog signal to thereby generate the musical tones.
The above and other objects, features and advantages of the present invention will become more apparent from the accompanying drawings, in which like reference numerals are used to identify the same or similar parts in several views.
This invention will be described in further detail by way of example with reference to the accompanying drawings.
Now, referring to
The OS is installed with a driver defining a software sound source module SSM. This module is a program for generating music tone waveform data based on the MIDI messages inputted via the first interface IF1. The OS also has a second interface IF2 denoted by WAVE out Application Interface or WAVE out API for receiving the music tone waveform data generated by the software sound source module SSM. Further, the OS is installed with an output device OUD which is a software driver for outputting the music tone waveform data inputted via the second interface IF2. To be more specific, this output device OUD reads, via a direct memory access (DMA) controller, the music tone waveform data generated by the software sound source module SSM and temporarily stored in a storage device such as a hard disk, and outputs the read music tone waveform data to a predetermined hardware device such as a CODEC.
The MIDI messages outputted by the sequencer software APS1 are supplied to an input interface of the software sound source module SSM via the first interface IF1 and the OS. The software sound source module SSM performs music tone waveform data generation processing. In the present embodiment, the music tone waveform data is generated by FM tone generating based on the received MIDI messages. The generated music tone waveform data is supplied to the output device OUD via the second interface IF2 and the OS. In the output device OUD, the supplied music tone waveform data is outputted to the above-mentioned CODEC to be converted into an analog music tone signal.
Thus, the present embodiment allows, at the OS level, ready combination of the software sound source module SSM for generating music tone waveform data and the sequencer software APS1 which is the application software for outputting MIDI messages. This makes it unnecessary to add any hardware components dedicated to music tone waveform data generation, resulting in reduced cost.
Referring to
The DMA controller 14a directly reads the music tone waveform data generated by the music tone generation processing from an output buffer of the RAM 3 in a direct memory access manner, depending on the free-space state of a data buffer incorporated in a DAC 14b. The DMA controller 14a transfers the read music tone waveform data to the data buffer of the DAC 14b for the sound reproducing process. The analog music tone signal converted by the DAC 14b is sent to a sound system 18, in which the analog music tone signal is converted into a sound.
The hard disk of the hard disk drive 6 stores the above-mentioned OS, utility programs, software for implementing a software sound source that is the above-mentioned software sound source module SSM, and other application programs including the above-mentioned sequencer software APS1.
The output device OUD mentioned in
The communication I/F 11 is connected to the communication network 101 such as a LAN (Local Area Network), the Internet, or a public telephone line. The communication I/F 11 is further connected to the server computer 102 via the communication network 101. If none of the above-mentioned programs and parameters are stored on the hard disk of the hard disk drive 6, the communication I/F 11 is used to download the programs and parameters from the server computer 102. A client computer (namely, the sound source system of the present embodiment) sends a command to the server computer 102 via the communication I/F 11 and the communication network 101 for requesting downloading of the programs and parameters. Receiving this command, the server computer 102 distributes the requested programs and parameters to the client computer via the communication network 101. The client computer receives these programs and parameters via the communication I/F 11, and stores the received programs and parameters in the hard disk of the hard disk drive 6, upon which the downloading operation is completed. In addition, an interface for transferring data directly to and from an external computer may be provided.
The following is an overview of the music tone generation processing based on FM tone generating by the software sound source module SSM with reference to
Referring to
It should be noted that, in addition to the music tone parameter VOICEk, these timbre registers TONEPARk store data TM indicating a time at which the software sound source module SSM has received a MIDI message corresponding to the music tone parameter VOICEk. The data TM provides information for determining time positions of key-on and key-off operations within a predetermined frame period.
Referring to
When the music tone waveform data for one frame has been generated by the music tone generation processing, the generated music tone waveform data is written to the output buffer of the RAM 3. Reproduction of the written data is reserved in the output device OUD. This reservation in the OUD is equivalent to the outputting of the generated music tone waveform data from the software sound source module SSM to the second interface IF2 (WAVE out API) of the OS level.
The output device OUD reads the music tone waveform data, sample by sample, from the output buffer reserved for the reproduction in the immediately preceding frame, and outputs the data to the DAC 14b. For example, as shown in
The following is an overview of the music tone generation processing based on music tone parameter VOICEn. In this embodiment, the music tone generation processing is based on FM tone generating as shown in
The operator herein denotes a block that provides a unit in which tone creation or music tone generation processing is performed. To be more specific, from various basic waveform data used for the tone creation, one of the basic waveforms shown in
The following explains a data format of the above-mentioned music tone parameter VOICEj.
As shown in
It should be noted that the music tone parameter VOICEj simultaneously has two types of data, one type read from the ROM 2, RAM 3, or the hard disk and the other type determined according to the data in the MIDI message. The data determined according to the MIDI message includes the key-on data KEYONj, the frequency number FNOj, the volume data VOLj, and the touch velocity data VELj. The data read from the ROM 2 and so on includes the algorithm designation data ALGORj and the operator data OPkDATAj.
As shown in
The sampling frequency designation data FSAMPm contains an integer value f equal to or greater than “0”. This integer value f causes the sampling frequency FSMAX (for example, 44.1 kHz) in standard mode to be multiplied by 2^-f. For example, if f=0, a music tone waveform in operator m is generated at the sampling frequency FSMAX of the standard mode; if f=1, a music tone waveform in operator m is generated at the sampling frequency of FSMAX/2.
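As a sketch, the per-operator sampling frequency implied by the designation data may be computed as follows; the function name is an illustrative assumption, and FSMAX = 44.1 kHz follows the example above.

```python
FSMAX = 44100  # standard-mode sampling frequency in Hz, per the example above

def operator_sampling_frequency(f):
    """Return the sampling frequency of an operator whose FSAMP value is f.

    f is an integer equal to or greater than 0; the standard-mode frequency
    FSMAX is multiplied by 2**-f, so f=0 yields FSMAX and f=1 yields FSMAX/2.
    """
    if f < 0:
        raise ValueError("FSAMP must be a non-negative integer")
    return FSMAX * 2 ** -f
```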
The operator priority data OPPRIOm contains data (for example, numbers indicating the order in which waveform computing operations are performed) indicating the priority of the waveform computation processing among all operators k (k=1 to m). According to this priority data, the priority by which each operator is activated is determined for the waveform computation processing. Alternatively, the performance and load states of the CPU 1 are checked to determine the operators to be activated. If this check indicates that the CPU 1 has no more capacity for performing tone generation processing, the computation processing of the operators of lower priorities may be left out. In the present embodiment, the priorities of the computation processing are set according to the timbre applied to the music tone. Alternatively, the priorities may be set according to MIDI channels, for example. Namely, the priorities set by some reference may be selected for use at sounding. For example, if the priorities are not set according to the timbre, the operator priority data OPPRIOm may be determined based on the timbre parameter expanded in the above-mentioned timbre register TONEPARn. The operator priority data OPPRIOm may also be used to determine whether operator m is to be used.
In the present embodiment, the sampling frequency can be set for each operator m by the above-mentioned sampling frequency designation data FSAMPm. Alternatively, the sampling frequency may be set differently for the two types of the operators, the carrier and the modulator. For example, the carrier may be set to the above-mentioned frequency FSMAX and the modulator may be set to ½ of the FSMAX. In this case, the contents of the algorithm of the timbre parameter concerned are checked and the sampling frequency may be accordingly set for the operators with which the timbre parameter is combined. Alternatively, the load state of the CPU 1 is checked and the sampling frequency may be accordingly increased or decreased.
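A two-operator system of the kind discussed above, in which a modulator feeds a carrier, may be sketched in Python as follows; the `Operator` class, table size, and parameter values are illustrative assumptions rather than the embodiment's actual code.

```python
import math

TABLE_SIZE = 256
SINE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

class Operator:
    """One FM operation block: a phase-accumulating basic-waveform lookup."""

    def __init__(self, freq_number, level=1.0, table=SINE):
        self.fno = freq_number      # phase increment per sample (frequency number)
        self.level = level          # output level (stands in for the envelope)
        self.table = table
        self.phase = 0.0

    def compute(self, modulation=0.0):
        # A modulator's output shifts the read phase of this operator's table.
        idx = int(self.phase + modulation * TABLE_SIZE) % TABLE_SIZE
        self.phase = (self.phase + self.fno) % TABLE_SIZE
        return self.table[idx] * self.level

# A two-operator algorithm for one channel: the modulator output is fed into
# the carrier's phase (frequency numbers and levels are arbitrary examples).
modulator = Operator(freq_number=8.0, level=0.3)
carrier = Operator(freq_number=4.0)
samples = [carrier.compute(modulator.compute()) for _ in range(64)]
```

Running the modulator at half the carrier's sampling frequency, as suggested above, would simply mean calling its `compute` method for every other carrier sample and reusing the previous output in between.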
As shown in
As shown in
The MIDI-CH voice table is allocated at a predetermined area in the RAM 3. The table data, namely the voice numbers, are stored beforehand on the hard disk or the like in correspondence with the selected MIDI file. The user-selected MIDI file is loaded into a performance data storage area allocated at a predetermined location in the RAM 3. At the same time, the table data corresponding to the loaded MIDI file is loaded into the MIDI-CH voice table. Alternatively, the user can arbitrarily set the MIDI-CH voice table from the beginning or can change the table after standard voice numbers have been set to the music piece. MIDI messages are sequentially generated by the sequencer program APS1 and the generated MIDI messages are recognized by the software sound source module SSM. The software sound source module SSM then searches the MIDI-CH voice table for the voice number assigned to the MIDI channel of the MIDI message concerned. For example, if the MIDI channel of the MIDI message concerned is “2CH,” the voice number stored at the second location VOICEN02 in the MIDI-CH voice table is selected.
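The voice number lookup described above may be sketched as a simple table indexed by MIDI channel; the table contents and function name are illustrative.

```python
# One voice number per MIDI channel; position k of the table holds VOICENOk.
midi_ch_voice_table = [5, 12, 3, 7]   # illustrative voice numbers

def voice_for_channel(table, midi_channel):
    """Return the voice number assigned to a 1-based MIDI channel number."""
    return table[midi_channel - 1]

# A MIDI message on channel 2 selects the voice number stored at the
# second location of the MIDI-CH voice table.
selected = voice_for_channel(midi_ch_voice_table, 2)
```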
When voice number j is found, the software sound source module SSM generates music tone parameter VOICEj as described above. To be more specific, the software sound source module SSM reads the basic data from the ROM 2 and determines other parameters from the MIDI message concerned to generate the music tone parameter VOICEj shown in
As described above, the inventive sound source apparatus has the operation blocks OPs (shown in
The following explains the control processing to be performed by the sound source system thus constituted, with reference to
Then, the sound source module SSM checks to see whether any of the following triggers has taken place (step S13).
Trigger 1: the sequencer software APS1 has been started for supplying a MIDI message to the software sound source module SSM.
Trigger 2: an internal interrupt signal (a start signal) for starting execution of the waveform computation processing by the SSM has been generated by a software timer.
Trigger 3: a request has been made by the CODEC hardware for transferring the music tone waveform data from the output buffer to a buffer in the CODEC hardware.
Trigger 4: the user has operated the mouse 7 or the keyboard 8 and the corresponding operation event has been detected.
Trigger 5: the user has terminated the main routine and the corresponding operation event has been detected.
In step S14, the CPU 1 determines which of the above-mentioned triggers 1 through 5 has taken place. If the trigger 1 has taken place, the software sound source module SSM passes control by the CPU 1 to step S16, in which a MIDI processing subroutine is executed. If the trigger 2 has taken place, the software sound source module SSM passes control to step S17, in which a waveform computation processing subroutine is executed. If the trigger 3 has taken place, the process goes to step S18, in which the music tone waveform data is transferred from the output buffer to the buffer of the CODEC hardware. If the trigger 4 has taken place, the software sound source module SSM passes control to step S19, in which a timbre setting processing subroutine is executed if a timbre setting event has occurred; if another event has occurred, corresponding processing is performed in step S20. If the trigger 5 has taken place, the software sound source module SSM passes control to step S21, in which end processing, such as returning the screen of the display 5 to the initial state provided before the main program was started, is performed. When any of the steps S16 through S21 has ended, the software sound source module SSM passes control to step S12 to repeat the above-mentioned operations.
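The trigger dispatch of steps S13 through S21 may be sketched as follows; the handler names and return values are illustrative placeholders for the actual subroutines.

```python
def dispatch(trigger, handlers):
    """Route one detected trigger (1..5) to its processing subroutine (S16-S21)."""
    handler = handlers.get(trigger)
    if handler is None:
        raise ValueError("unknown trigger: %d" % trigger)
    return handler()

# Illustrative stand-ins for the subroutines of steps S16 through S21.
handlers = {
    1: lambda: "midi",       # MIDI message received -> MIDI processing (S16)
    2: lambda: "waveform",   # software-timer interrupt -> waveform computation (S17)
    3: lambda: "transfer",   # CODEC request -> output buffer transfer (S18)
    4: lambda: "event",      # mouse/keyboard event -> timbre setting etc. (S19/S20)
    5: lambda: "end",        # termination event -> end processing (S21)
}
```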
Next, in step S32, the software sound source module SSM determines whether the MIDI event is a note-on event. If the MIDI event is found to be a note-on event, the software sound source module SSM passes control to step S33; if not, the SSM passes control to step S40 shown in FIG. 12. In step S33, the SSM decodes the note-on event data and stores resultant note-number data, velocity value data and part number data (namely, the MIDI channel number) into registers NN, VEL, and p, respectively. Further, the SSM stores the data about the time at which the note-on event should take place into a register TM allocated at a predetermined position in the RAM 3. Hereafter, the contents of the registers NN, VEL, p, and TM are referred to as note number NN, velocity VEL, part p, and time TM, respectively.
In step S34, the software sound source module SSM determines whether velocity VEL is lower than a predetermined value VEL1 and whether volume data VOLp is lower than a predetermined value VOL1. The VOLp denotes the volume data of the part p stored in area VOLp allocated at a predetermined area in the RAM 3. This VOLp is changed by the control change #7 event of the MIDI message as explained with reference to FIG. 7A. The change is performed in the miscellaneous processing of step S20 when the control change #7 event has taken place. In step S34, if VEL≤VEL1 and VOLp≤VOL1, the regular timbre allotted to the part p is replaced by a substitute timbre of an algorithm having a small number of operators, namely a small total number of carriers and modulators. That is, the voice number stored in VOICENOp of the part p in the above-mentioned MIDI-CH voice table is replaced by the voice number of the music tone parameter VOICE having an alternate algorithm (step S35). If VEL>VEL1 or VOLp>VOL1, the SSM skips step S35 and passes control to step S36. In the present embodiment, whether the processing of step S35 is to be performed is determined according to the values of velocity VEL and volume VOLp. The decision may also be made by detecting the load state of the CPU 1 and according to the detection result, for example.
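The threshold test of steps S34 and S35 may be sketched as follows; the threshold values VEL1 and VOL1 are illustrative assumptions, as the embodiment does not disclose concrete values.

```python
VEL1 = 32   # illustrative velocity threshold
VOL1 = 40   # illustrative part-volume threshold

def select_voice(regular_voice, substitute_voice, vel, volp):
    """Steps S34/S35: adopt the substitute-timbre algorithm (fewer operators)
    when both the velocity and the part volume are at or below the thresholds."""
    if vel <= VEL1 and volp <= VOL1:
        return substitute_voice   # lighter computation load for a quiet tone
    return regular_voice
```

A quiet tone masks the loss of the eliminated operators, so the substitution reduces the computation load with little audible effect.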
In step S36, channel assignment processing based on the note-on event concerned is performed. The channel number of the assigned sound channel is stored in register n allocated at a predetermined location in RAM 3. The contents stored in the register n are hereafter referred to as sound channel n. In step S37, the MIDI-CH voice table shown in
In step S38, the music tone parameter VOICEj generated in step S37 is transferred or expanded along with time TM into the timbre register TONEPARn corresponding to the sound channel n. At the same time, key-on data KEYONn in the timbre register TONEPARn and each operator-on parameter OPONm are set to “1” (on). Further, in step S39, the computational order is determined among the sound channels assigned for sounding such that music tone generating computations are performed in the order of note-on event occurrence times. To be more specific, the channel numbers are rearranged according to the determined computational order and the rearranged channel numbers are stored in CH sequence register CHSEQ allocated at a predetermined position in the RAM 3, upon which this MIDI processing comes to an end. The CH sequence register CHSEQ is illustrated in FIG. 13.
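The reordering of step S39 can be sketched as below; the dictionary representation of the note-on times is an assumption made for illustration, not the register layout of the embodiment.

```python
def build_chseq(note_on_times):
    """Hypothetical sketch of step S39: order the sounding channels so that
    channels whose note-on events occurred earlier are computed first.
    note_on_times maps a sounding channel number to its note-on time TM."""
    return sorted(note_on_times, key=lambda ch: note_on_times[ch])
```

The resulting list plays the role of the CH sequence register CHSEQ: the waveform computation loop simply walks it from front to back.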
In step S40 of
In step S44, it is determined whether the MIDI event is a program change event for changing timbres. If the MIDI event is found a program change event, the data of VOICENOp at the position corresponding to the part p (this part p is not necessarily the part number stored in step S33) designated by the received program change event is changed to value PCHNG designated by the received program change event, upon which this MIDI processing comes to an end (step S45). On the other hand, if the MIDI event is found other than a program change event, the corresponding processing is performed, upon which this MIDI processing comes to an end.
In this MIDI processing, the timbres corresponding to a plurality of parts are designated in the MIDI-CH voice table. If a note-on event of a plurality of designated parts occurs, a music tone having timbres of the plurality of parts is generated and sounded. Namely, this MIDI processing uses multi-timbre operation specifications. Alternatively, this MIDI processing may use a single-timbre mode in which only a note-on event of a particular part is accepted to generate a music tone of the corresponding timbre.
Then, index i indicating a channel number is initialized to “1” (step S54). In step S55, the channel number SEQCHNOi stored in SEQCHi at i position in the CH sequence register CHSEQ shown in
Moreover, a computation amount in the current frame is determined according to the note events and the like (step S57). Determining the computation amount actually means determining the net area of the music tone waveform buffer for which the waveform computation processing is to be performed in channel n. The music tone waveform buffer is an area large enough to store the waveform data of one frame time in which the current computation is made. On the other hand, the music tone waveform data of each channel is not necessarily generated over the entire one-frame area. Namely, since the sounding timing and muting timing of music tones are different for different channels, a music tone of a certain channel may be turned on or off halfway through the music tone waveform buffer. In view of this, the computation amount must be determined for each channel.
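The clipping of a channel's sounding interval to the current frame, as in step S57, can be sketched as follows; the sample-count representation of times is an assumption for illustration.

```python
def frame_region(frame_start, frame_len, on_time, off_time=None):
    """Sketch of step S57: clip a channel's sounding interval to the current
    frame, returning the (start, end) sample offsets within the music tone
    waveform buffer for which the waveform computation is to be performed.
    Times are expressed in samples; off_time=None means still sounding."""
    start = max(0, on_time - frame_start)
    end = frame_len if off_time is None else min(frame_len, off_time - frame_start)
    return start, max(start, end)
```

A channel that turns on or off partway through the frame thus gets only a partial region computed, which is why the computation amount differs per channel.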
Next, in step S58 of
In step S60, the music tone waveform data for one frame generated in steps S58 and S59 is written to the music tone waveform buffer. At this moment, if music tone waveform data is already stored in the music tone waveform buffer, the data obtained this time is added to the existing data and the result of the addition is written to the music tone waveform buffer. Then, the value of index i is incremented by one (step S61) to determine whether the resultant value of index i is greater than the above-mentioned maximum number of channels CHmax (step S62).
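The accumulating write of step S60 amounts to mixing each channel into a shared frame buffer by addition, which can be sketched as follows (a simplification assuming plain Python lists for the buffers).

```python
def mix_into_buffer(frame_buf, channel_samples, start=0):
    """Sketch of step S60: accumulate one channel's generated samples into
    the shared music tone waveform buffer (addition, not overwrite), so
    previously written channels are preserved in the mix."""
    for i, s in enumerate(channel_samples):
        frame_buf[start + i] += s
    return frame_buf
```

Because every channel adds into the same buffer, the buffer holds the summed waveform of all sounding channels once the loop over CHSEQ completes.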
In step S62, if i≦CHmax, or if there are more channels to be processed for the waveform generation, the SSM returns control to step S55, in which the above-mentioned processing operations are repeated. If i>CHmax, or if there is no channel to be processed, muting channel processing for gradually decreasing the size of a volume envelope is performed for the sound channel turned off this time (step S63). In step S64, the music tone waveform data thus generated is removed from the music tone waveform buffer, and the removed data is passed to the CODEC hardware which is an output device. Then, reproduction of the data is instructed, upon which this waveform computation processing comes to an end.
If the velocity value of channel n becomes smaller than a predetermined value, the FM computation for that channel n may not be performed. In order to implement this operation, step S71 is provided after the above-mentioned step S55 as shown in FIG. 14. In step S71, it is determined whether touch velocity data VELn in the timbre register TONEPARn of channel n is higher than predetermined value VELn1. If VELn≧VELn1, the SSM passes control to step S56; if VELn<VELn1, key-off is designated for channel n in a similar manner to that of step S43 shown in FIG. 12. Then, the SSM passes control to step S61.
In step S83, if the operator computation processing for the operator m is to be performed, it is determined whether channel n is currently sounding continuously from the preceding frame (step S84). If channel n is found continuously sounding, based on each data stored in the buffer OPBUFm in the operator data OPmDATAn of the timbre register TONEPARn, the operator data OPmDATAn is returned to the state of the operator m at the end of computation of the preceding frame (step S85). The buffer OPBUFm in each operator data OPmDATAn holds the result obtained by the computation performed immediately before. Using this result allows the return to the state of the immediately preceding operator data OPmDATAn. The operator data OPmDATAn is returned to the state at the end of computation of the preceding frame because the music tone waveform data of channel n in the current frame must be generated as the continuation from the preceding frame.
On the other hand, if channel n is found not sounding continuously from the preceding frame in step S84, the SSM skips step S85 and passes control to step S86. In step S86, the operator computation processing subroutine for the operator m is executed. In step S87, the value of variable m is incremented by one. In step S88, if there are more operators to be processed, the SSM returns control to step S82, in which the above-mentioned processing operations are repeated. If there is no more operator to be processed, the FM computation processing for channel n comes to an end.
In steps S82 and S83, the load state of the CPU 1 is checked to determine whether the computation of the operator m is to be performed. Alternatively, the computation for the operators having lower priority may not be performed regardless of the load state of the CPU 1. This can increase the number of sound channels when the capacity of the CPU 1 is not so high.
In step S92, it is determined whether the sampling frequency designation data FSAMPm in the operator data OPmDATAn is “0” or not. Namely, it is determined whether a music tone waveform is to be generated at the sampling frequency FSMAX of standard mode. If FSAMPm=“0”, it indicates the standard mode in which each operator performs the music tone waveform generation at the standard sampling frequency. Then, AEGm computation is performed according to the setting value of the envelope parameter EGPARm in the operator data OPmDATAn. The result of this computation is stored in the EG state buffer EGSTATEm (step S93).
On the other hand, if FSAMPm≠“0”, for example, FSAMPm=f, the sampling frequency FSMAX of the standard mode is multiplied by 2^−f and the music tone waveform generation is performed at the resultant frequency. Namely, in step S94, a parameter of which rate varies (hereafter referred to as a variable-rate parameter) in the envelope parameters EGPARm is multiplied by 2^f to perform the AEG computation. The result is stored in the EG state buffer EGSTATEm. The rate of the variable-rate parameter is multiplied by 2^f before the envelope generating computation for the following reason. Namely, since the sampling frequency is reduced to FSMAX×2^−f, the time variation of the variable-rate parameter of the envelope parameter EGPARm is made faster to perform the music tone waveform generation at the sampling frequency concerned. Subsequently, the generated waveform samples are written to 2^f continuous addresses of the buffer, thereby making adjustment such that the resultant music tone has the same pitch as that of the original music tone. Thus, step S93 or S94 performs the computation of envelope data AEGm as shown in FIG. 19.
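The rate scaling and duplicate writing described above can be sketched with a deliberately simple linear envelope; the linear shape and the parameter values are hypothetical, not the embodiment's actual EG.

```python
def envelope_frame(level, rate, n_samples, f=0):
    """Sketch of steps S93/S94: at the reduced rate FSMAX*2^-f the per-sample
    EG increment is scaled by 2^f, and each computed value is written to 2^f
    continuous slots, so the envelope spans the same real time either way."""
    step = rate * (1 << f)      # variable-rate parameter scaled by 2^f
    out = []
    for _ in range(n_samples >> f):  # only 1/2^f values are actually computed
        level += step
        out.extend([level] * (1 << f))  # duplicate onto 2^f buffer addresses
    return out
```

Both modes reach the same final level at the frame boundary; the reduced-rate mode merely computes fewer distinct values, which is the source of the CPU saving.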
In step S95, the data AEGm obtained by the AEGm computation is multiplied by the value of a total level parameter TLm in the operator data OPmDATAn to compute an output level AMPm (=AEGm×TLm) of the operator m as shown in FIG. 19. Then, the amplitude controlling envelope data AEGm computed in step S93 or S94 and the output level AMPm of the operator m computed in step S95 are checked independently (step S96). Based on the check results, it is determined whether the data value AEGm and the data value AMPm are lower than a predetermined time and a predetermined level, respectively, thereby determining in turn whether the operator m is to be operated or not (step S97). In other words, it is determined whether the music tone waveform computation in the operator m may be ended or not. If the decision is YES, the SSM passes control to step S98; if the decision is NO, the SSM passes control to step S101 shown in FIG. 18.
In step S98, it is determined whether the operator m is a carrier. If the operator m is found a carrier, the SSM passes control to step S99. In step S99, the buffer OPBUF for the operator m and the modulator modulating only the operator m are cleared, the waveform computation is stopped, and this operator computation processing is ended. Thus, if the operator m is a carrier, not only the waveform computation of the operator m but also the waveform computation of the modulator modulating only the operator m is stopped. The carrier is an operator that eventually outputs the music tone waveform data as shown in
In step S101 shown in
In step S105, a phase value update computation is performed. The updated result is stored in the phase value buffer PHBUFm (the contents thereof hereafter being referred to as phase value PHBUFm) in the operator data OPmDATAn of the operator m. The phase value update computation denotes herein the computation enclosed by dashed line A in FIG. 19. To be more specific, computation MODINm+FBm+FNOn×MULTm+PHBUFm is performed. MODINm and FBm denote the values stored in the modulation data input buffer MODINm in the operator data OPmDATAn and the feedback output value buffer FBm, respectively. FNOn denotes the frequency number FNOn in the music tone parameter VOICEn. MULTm denotes the frequency multiple data MULTm in the operator data OPmDATAn. PHBUFm denotes the last value of the values stored in the phase value buffer PHBUFm in the operator data OPmDATAn.
In step S106, a table address is computed based on the phase value PHBUFm computed in step S105. From the basic waveform data (for example, a waveform selected from among the above-mentioned eight types of basic waveforms) selected according to the wave select data WSELm of the operator m, data WAVEm (PHBUFm) at the position pointed to by this computed address is read. It should be noted that the table storing the basic waveform data is referred to as the “basic waveform table.” The data WAVEm (PHBUFm) is multiplied by the output level AMPm computed in step S95. The result is stored in the operator output value buffer OPOUTm (=WAVEm (PHBUFm)×AMPm) of the operator m.
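One possible reading of steps S105 and S106 is sketched below. The table length and sine contents are assumptions; here only FNOn×MULTm accumulates in the phase buffer while the modulation terms MODINm and FBm offset the read address, a common FM arrangement consistent with the relation given above.

```python
import math

TABLE_LEN = 256  # hypothetical basic waveform table length
SINE = [math.sin(2 * math.pi * i / TABLE_LEN) for i in range(TABLE_LEN)]

def operator_sample(state, modin, fb, fno, mult, amp):
    """One operator tick (steps S105-S106 sketch): advance the phase
    accumulator by FNOn*MULTm, form the modulated table address, read the
    basic waveform table, and scale by the output level AMPm."""
    state["phbuf"] += fno * mult                        # phase value update
    addr = int(state["phbuf"] + modin + fb) % TABLE_LEN  # modulated address
    return SINE[addr] * amp                              # OPOUTm
```

Feeding one operator's output into another's `modin` argument is what realizes the carrier/modulator connections of the algorithm.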
In step S107, feedback sample computation is performed by the following relation, storing the result in the feedback output value buffer FBm of the operator m.
0.5×(FBm+OPOUTm×FBLm)
OPOUTm denotes the waveform sample data generated in step S106. FBLm denotes the feedback level data FBLm of the operator m to be computed. The feedback sample computation is performed to prevent parasitic oscillation from occurring.
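The feedback relation of step S107 is simple enough to state directly; this sketch only restates the formula given above.

```python
def feedback_update(fb, opout, fbl):
    """Step S107 sketch: the new feedback value is the average of the
    previous feedback value FBm and the newly scaled operator output
    OPOUTm*FBLm, which damps parasitic oscillation in the feedback loop."""
    return 0.5 * (fb + opout * fbl)
```

Averaging halves the gain seen by any sample-to-sample alternating component in the loop, which is why it suppresses oscillation.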
In step S108, it is determined, as with step S98, whether the operator m is a carrier or not. If the operator m is found a modulator, this operator computation processing is ended immediately. On the other hand, if the operator m is found a carrier, the waveform sample data OPOUTm generated in step S106 is multiplied by the volume data VOLn of the music tone parameter VOICEn. The multiplication result (=OPOUTm×VOLn) is added to the position indicated by the pointer for pointing the write position of this time in the corresponding waveform buffer. Further, the value of this pointer is incremented by one (step S109), upon which this operator computation processing comes to an end.
In step S110, phase value update computation is performed, and the result is stored in the phase value buffer PHBUFm. This computation processing in step S110 differs from the computation processing in step S105 only in the added processing indicated by block B in FIG. 19. Since FSAMPm=f(≠0), the phase value must be shifted by f bits, or the value of the phase value buffer PHBUFm must be multiplied by 2^f, to change the read address of the basic waveform table to that obtained when the sampling frequency FSMAX is multiplied by 2^−f. Next, as in step S106, waveform sample generation is performed by the following relation, storing the result in the operator output value buffer OPOUTm.
WAVEm(2^f×PHBUFm)×AMPm
Then, as in step S107, a feedback sample computation is performed (step S112).
In step S113, it is determined, as in step S108, whether the operator m is a carrier. If the operator m is found a modulator, this operator computation processing is immediately ended. If the operator m is found a carrier, the waveform sample data OPOUTm generated in step S111 is multiplied by the volume data VOLn of music tone parameter VOICEn. The result (=OPOUTm×VOLn) is added to the 2^f consecutive addresses of the buffer starting at the position indicated by the pointer in the above-mentioned waveform buffer. Then, the pointer is incremented by 2^f (step S114), upon which this operator computation processing comes to an end. It should be noted that, when writing the plural pieces of sample data of the same value in step S114, interpolation may be made between the pieces of sample data as required, writing the resultant interpolation values to the above-mentioned areas.
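The reduced-rate write of step S114, including the optional interpolation mentioned above, can be sketched as follows; the linear interpolation from the previous sample is one possible choice, not mandated by the text.

```python
def write_lowrate(buf, pos, sample, prev, f):
    """Sketch of steps S109/S114: at the full rate (f=0) write one sample; at
    the reduced rate fill 2^f consecutive slots, here linearly interpolated
    from the previous sample to smooth the stair-step, then advance the
    write pointer by 2^f."""
    n = 1 << f
    for k in range(n):
        t = (k + 1) / n                    # fraction of the way to the new sample
        buf[pos + k] += prev + (sample - prev) * t  # accumulate into the buffer
    return pos + n                         # advanced write pointer
```

With f=0 this degenerates to the single accumulate-and-increment of step S109, so one routine can serve both the standard and reduced-rate paths.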
In the present embodiment, as explained in steps S106 and S111, the values stored in the basic waveform table are used for the basic waveform data. Alternatively, the basic waveform data may be generated by computation. Also, the basic waveform data may be generated by combining table data and computation. For the address by which the basic waveform table is read in steps S106 and S111, the address obtained based on the phase value PHBUFm computed in steps S105 and S110 is used. Alternatively, the address obtained by distorting this phase value PHBUFm by computation or by a nonlinear characteristic table may be used.
It should be noted that the user may alternatively set the desired number of operators for each of MIDI channels. If the desired number of operators is set to the channel concerned when changing the voice numbers in the MIDI-CH voice table, the voice numbers corresponding to the music tone parameters VOICE equal to or lower than the number of operators may be displayed in a list. From among these voice numbers, the user may select and set desired ones. At this time, the desired number of operators set to the channel concerned may also be automatically changed. The voice numbers within the automatically changed number of operators may be displayed in a list. Moreover, when the user has changed the voice numbers in the MIDI-CH voice table, the total number of operators constituting the music tone parameters VOICE corresponding to the changed voice numbers may be checked. According to the load state of the CPU 1, warning that this timbre cannot be assigned to the channel concerned may be displayed. In addition to such a warning, the voice number of the channel concerned may be automatically changed to the voice number of an alternate timbre obtained by the smaller number of operators.
As described, the present embodiment is constituted such that the number of operators for use in the FM computation processing can be flexibly changed according to the capacity of the CPU 1, the operating environment of the embodiment, the purpose of use, and the setting of processing. Consequently, the novel constitution can freely adjust the load of the CPU 1 and the quality of output music tone waveforms, thereby significantly enhancing the degree of freedom of the sound source system in its entirety. In the present embodiment, FM tone generation is used for the music tone waveform generation. It will be apparent that the present invention is also applicable to a sound source that performs predetermined signal processing such as AM (Amplitude Modulation) and PM (Phase Modulation) by combining music tone waveform generating blocks. Further, the CPU load mitigating method according to the invention is also applicable to a sound source based on waveform memory reading and to a physical model sound source implemented in software. The present embodiment is an example of personal computer application. It will be apparent that the present invention is also easily applicable to amusement equipment such as game machines, karaoke apparatuses, electronic musical instruments, and general-purpose electronic equipment. Further, the present invention is applicable to a sound source board and a sound source unit as personal computer options.
The software associated with the present invention may also be supplied in disk media such as a floppy disk, a magneto-optical disk, and a CD-ROM, or machine-readable media such as a memory card. Further, the software may be added by means of a semiconductor memory chip (typically ROM) which is inserted in a computer unit. Alternatively, the sound source software associated with the present invention may be distributed through the communication interface I/F 11. It may be appropriately determined according to the system configuration or the OS whether the sound source software associated with the present invention is to be handled as application software or device software. The sound source software associated with the present invention or the capabilities of this software may be incorporated in other software; for example, amusement software such as game and karaoke and automatic performance and accompaniment software.
The inventive machine readable media is used for a processor machine including a CPU and contains program instructions executable by the CPU for causing the processor machine having operators in the form of submodules composed of software to compute waveforms for performing operation of generating a plurality of musical tones through a plurality of channels according to performance information. The operation comprises the steps of setting an algorithm which determines a module composed of selective ones of the submodules logically connected to each other to compute a waveform specific to one of the musical tones, designating one of the channels to be used for generating said one musical tone in response to the performance information, loading the selective submodules into said one channel, and logically executing the loaded selective submodules according to the algorithm so as to compute the waveform to thereby generate said one musical tone through said one channel.
Preferably, the step of setting sets different algorithms which determine different modules corresponding to different timbres of the musical tones. Each of the different modules is composed of selective ones of the submodules which are selectively and sequentially connected to each other to compute a waveform which is specific to a corresponding one of the different timbres.
Preferably, the step of setting comprises adjusting a number of the submodules to be loaded into the channel dependently on a condition during the course of generating the musical tone.
Preferably, the step of adjusting comprises compacting the module so as to reduce the number of the submodules when the condition indicates that an amplitude envelope of the waveform attenuates below a predetermined threshold level.
Preferably, the step of adjusting comprises compacting the module so as to reduce the number of the submodules when the condition indicates that an output volume of the musical tone is turned down below a predetermined threshold level.
Preferably, the step of adjusting comprises eliminating one submodule so as to reduce the number of the submodules to be loaded into the channel when the condition indicates that said one submodule loses contribution to computation of the waveform without substantially affecting other submodules.
The inventive machine readable media contains instructions for causing a processor machine having a software module to compute samples of a waveform in response to a sampling frequency for performing operation of generating a musical tone according to performance information. The operation comprises the steps of periodically operating the processor machine to execute the software module based on a variable sampling frequency for successively computing samples of the waveform so as to generate the musical tone, detecting a load of computation imposed on the processor machine during the course of generating the musical tone, and changing the variable sampling frequency according to the detected load to adjust a rate of computation of the samples.
Preferably, the step of changing provides a fast sampling frequency when the detected load is relatively light, and provides a slow sampling frequency when the detected load is relatively heavy such that the rate of the computation of the samples is reduced to 1/n, where n denotes an integer number.
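The load-dependent selection described above can be sketched as follows; the load scale, the thresholds, and the divisors are hypothetical values, since the text only requires that heavier loads map to slower 1/n sampling frequencies.

```python
def pick_sampling_freq(load, fs1=44100):
    """Sketch of the load-adaptive rate change: map a detected CPU load in
    the range 0..1 to a 1/n divisor of the standard sampling frequency FS1.
    Thresholds 0.5 and 0.8 are assumed for illustration."""
    if load < 0.5:
        return fs1        # light load: full rate, highest fidelity
    if load < 0.8:
        return fs1 // 2   # moderate load: half rate
    return fs1 // 4       # heavy load: quarter rate
```

Polling the detected load each frame and re-running this selection is one way the variable sampling frequency could track changing conditions during the course of a tone.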
In the computation for generating the tone waveform data, a number of samples for one frame is generated for each sound channel. The tone waveform data for all sound channels are accumulated and written to a waveform output buffer. Then, reproduction of the waveform output buffer is reserved for the output device OUD. This reservation is equivalent to outputting of the generated tone waveform data from the software sound source module SSM to the second interface “WAVE out API.” The output device OUD reads, for each frame, the tone waveform data sample by sample from the waveform output buffer reserved for reproduction, and sends the read tone waveform data to the DAC which is the external hardware. For example, from the waveform output buffer which is reserved for reproduction and written with the tone waveform data generated in the first frame from time T1 to time T2, the tone waveform data is read in the second frame from time T2 to time T3. The read tone waveform data is converted by the DAC into an analog music tone waveform signal to be sounded from a sound system.
The ROM 2 stores the operating program and so on. The RAM 3 includes a parameter buffer area for storing various tone control parameters, a waveform output buffer area for storing music tone waveform data generated by computation, an input buffer area for storing a received MIDI message and a reception time thereof, and a work memory area used by the CPU 1. The display 5 and the display interface 4 provide means for the user to interact with the processing apparatus. The HDD 6 stores the operating system OS such as Windows 3.1 (registered trademark) or Windows 95 (registered trademark) of Microsoft Corp., programs for implementing the software sound source module, and other application programs for implementing “MIDI API” and “WAVE API.” A CD-ROM 7-1 is loaded in the CD-ROM drive 7 for reading programs and data from the CD-ROM 7-1. The read programs and data are stored in the HDD 6 and so on. In this case, a new sound source program for implementing a software sound source is recorded on the CD-ROM 7-1. The old sound source program can be upgraded with ease by the CD-ROM 7-1, which is a machine readable media containing instructions for causing the personal computer to perform the tone generating operation.
The digital signal processor board 9 is an extension sound-source board. This board is a hardware sound source such as an FM synthesizer sound source or a wave table sound source. The digital signal processor board 9 is composed of a DSP 9-1 for executing computation and a RAM 9-2 having various buffers and various timbre parameters.
The network interface 11 connects this processing apparatus to the Internet or the like via a LAN such as Ethernet or via a telephone line, thereby allowing the processing apparatus to receive application software such as sound source programs and data from the network. The MIDI interface 12 transfers MIDI messages to and from external MIDI equipment, and receives MIDI events from a performance operator device 13 such as a keyboard instrument. The contents and reception times of the MIDI messages inputted through this MIDI interface 12 are stored in the input buffer area of the RAM 3.
The CODEC 14 reads the tone waveform data from the waveform output buffer of the RAM 3 in direct memory access manner, and stores the read tone waveform data in a sample buffer 14-1. Further, the CODEC 14 reads samples of the tone waveform data, one by one, from the sample buffer 14-1 at a predetermined sampling frequency FS (for example, 44.1 kHz), and converts the read samples through a DAC 14-2 into an analog music tone signal, thereby providing a music tone signal output. This tone output is inputted into the sound system for sounding. The above-mentioned constitution is generally the same as that of a personal computer or a workstation. The tone waveform generating method according to the present invention can be practiced by such a machine.
The following outlines the tone waveform generating method according to the present invention by means of the software sound source module under the control of the CPU 1. When the application program APS1 is started, MIDI messages are supplied to the software sound source module SSM via the first interface IF1. Then, the MIDI output driver of the software sound source module SSM is started to set tone control parameters corresponding to the supplied MIDI messages. These tone control parameters are stored in sound source registers of respective sound channels assigned with the MIDI messages. Consequently, a predetermined number of samples of waveform data are generated by computation in the sound source that is periodically activated every computation frame as shown in FIG. 22.
The physical model sound source shown in
It should be noted that delay times DRL, DRR, and DL read from a table according to the pitch of the music tone to be generated are set to the delay circuits DELAY-RL, DELAY-RR, and DELAY-L, respectively. Filter parameters FRP and FRL for obtaining selected timbres are set to the lowpass filters FILTER-R and FILTER-L, respectively. In order to simulate the acoustic wave propagation mode that varies by opening or closing the tone hole, multiplication coefficients M1 through M4 corresponding to the tone hole open/close operations are supplied to the multipliers MU4 through MU7, respectively. In this case, the pitch of the output tone signal is generally determined by the sum of the delay times set to the delay circuits DELAY-RL, DELAY-RR, and DELAY-L. Since an operational delay time occurs on the lowpass filters FILTER-R and FILTER-L, a net delay time obtained by subtracting this operational delay time is distributively set to the delay circuits DELAY-RL, DELAY-RR, and DELAY-L.
The mouthpiece is simulated by a multiplier MU2 for multiplying a reflection signal coming from the circuit for simulating the right-side tube by multiplication coefficient J2 and a multiplier MU1 for multiplying a reflection signal coming from the circuit for simulating the left-side tube by multiplication coefficient J1. The output signals of the multipliers MU1 and MU2 are added together by an adder AD1, outputting the result to the circuit for simulating the right-side tube and the circuit for simulating the left-side tube. In this case, the reflection signals coming from the tube simulating circuits are subtracted from the output signals by subtractors AD2 and AD3, respectively, the results being supplied to the tube simulating circuits. An exciting signal EX OUT supplied from an exciter and multiplied by coefficient J3 is supplied to the adder AD1. An exciter return signal EX IN is returned to the exciter via an adder AD6. It should be noted that the exciter constitutes a part of the mouthpiece.
The output from this physical model sound source may be supplied to the outside at any portion of the loop. In the illustrated example, the output signal from the delay circuit DELAY-RR is outputted as an output signal OUT. The outputted signal OUT is inputted into an envelope controller EL shown in
The sound source model simulating a wind instrument has been explained above. In simulating a string instrument, a circuit for simulating a rubbed string section or a plucked string section in which a vibration is applied to a string is used instead of the circuit for simulating the mouthpiece. Namely, the signal P becomes an exciting signal corresponding to a string plucking force and a bow velocity, and the signal E becomes a signal equivalent to a bow pressure. It should be noted that, in simulating a string instrument, a multiplication coefficient NL2G supplied to the multiplier MU11 is made almost zero. Further, by setting the output of the nonlinear converter 2 to a predetermined fixed value (for example, one), the capability of the nonlinear converter 2 is not used. The delay circuits DELAY-RL, DELAY-RR, and DELAY-L then simulate string propagation times, and the lowpass filters FILTER-R and FILTER-L simulate string propagation losses. In the exciter, setting of the multiplication coefficients NLG1, NLG2, NL1, and NL2 allows the exciter to be formed according to a model instrument to be simulated.
The following explains various data expanded in the RAM 3 with reference to FIG. 27. As described above, when the software sound source module SSM is started, the MIDI output driver therein is activated, upon which various tone control parameters are stored in the RAM according to the inputted MIDI messages. Especially, if the MIDI messages designate a physical model sound source (also referred to as a VA sound source) as shown in
The buffer VATONEBUF stores the tone control parameter VATONEPAR as shown in FIG. 28. The VATONEBUF also stores a parameter SAMPFREQ indicating an operation sampling frequency at which samples of the tone waveform data are generated, a key-on flag VAKEYON which is set when a key-on event contained in a MIDI message designates the VA sound source, a parameter PITCH(VAKC) for designating a pitch, a parameter VAVEL for designating a velocity when the key-on event designates the VA sound source, and a breath controller operation amount parameter BRETH CONT. Moreover, the VATONEBUF has a pressure buffer PBUF for storing breath pressure and bow velocity, a pitch bend buffer PBBUF for storing a pitch bend parameter, an embouchure buffer EMBBUF for storing an embouchure signal or a bow pressure signal, a flag VAKONTRUNCATE for designating sounding truncate in the VA sound source, and a buffer miscbuf for storing volume and other parameters.
The parameter SAMPFREQ can be set to one of two sampling frequencies, for example. The first sampling frequency is 44.1 kHz and the second sampling frequency is half of the first sampling frequency, namely 22.05 kHz. Alternatively, the second sampling frequency may be double the first sampling frequency, namely 88.2 kHz. These sampling frequencies are illustrative only, hence not limiting the sampling frequencies available in the present invention. Meanwhile, if the sampling frequency is reduced to ½ times FS, the number of the tone waveform samples generated in one frame is reduced by half. Consequently, if the load of the CPU 1 is found heavy, the sampling frequency of ½ times FS may be selected to mitigate the load of the CPU 1, thereby preventing samples from being dropped from generation.
If the sampling frequency is set to 2 times FS, the number of tone waveform samples generated is doubled, allowing the generation of high-precision tone waveform data. Consequently, if the load of the CPU 1 is found light, the sampling frequency of 2 times FS may be selected to generate high-precision tone waveform data. For example, letting the standard sampling frequency in the present embodiment be FS1, a variation sampling frequency FS2 is represented by:
FS1=n times FS2 (n being an integer) . . . first example,
FS1=1/n times FS2 (n being an integer) . . . second example.
Because the present invention mainly uses the first example, the following description will be made mainly with reference to the first example.
In the present invention, the sampling frequencies of the tone waveform data to be generated are variable. If there is another acoustic signal to be reproduced by the CODEC, the sampling frequency of the DA converter in the CODEC may be fixed to a particular standard value. For example, when mixing the music tone generated by the software sound source according to the present invention with the digital music tone outputted from a music CD, the sampling frequency may be fixed to FS1=44.1 kHz according to the standard of the CD. The following explains an example in which the sampling frequency of the CODEC is fixed to a standard value. The relation between this standard sampling frequency FS1 and the variation sampling frequency FS2 is represented by FS1=n times FS2 as described before. The sampling frequency of the DA converter is fixed to the standard value. Therefore, the waveform output buffer WAVEBUF, which is read sample by sample at every period of this fixed standard sampling frequency FS1, is required to store beforehand a series of the waveform data matching the standard sampling frequency FS1 regardless of the sampling frequency selected for the waveform computation. If the sampling frequency FS2 which is 1/n of the sampling frequency FS1 is selected, the resultant computed waveform samples are written to the waveform output buffer WAVEBUF such that n samples of the same value are arranged at continuous buffer addresses. When the waveform data for one frame has been written to the waveform output buffer WAVEBUF, the contents of the waveform output buffer WAVEBUF may be passed to the CODEC. If the sampling frequency FSb of the data series stored in the waveform output buffer WAVEBUF differs from the operation sampling frequency FSc of the CODEC (or DAC), sampling frequency matching may be required.
For example, if FSb=k times FSc (k>1), then the tone waveform data may be sequentially passed from the waveform output buffer WAVEBUF in a skipped-read manner by updating the read address every k samples. Namely, between the processing of storing the music tone waveform samples in the waveform output buffer WAVEBUF and the processing by the DAC of the CODEC, a sampling frequency conversion circuit may be inserted to match the write and read sampling frequencies.
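The write-side duplication and skipped reading described above can be sketched as follows. This is an illustrative Python sketch only; the function names are ours, not from the embodiment. It assumes the waveform is computed at FS1/n while the output buffer is consumed at the fixed CODEC rate FS1.

```python
def write_frame_to_wavebuf(samples, n):
    """Duplicate each computed sample n times so that a frame computed at
    FS1/n fills the output buffer at the fixed CODEC rate FS1."""
    wavebuf = []
    for s in samples:
        wavebuf.extend([s] * n)  # n copies on continuous addresses
    return wavebuf

def read_skipped(wavebuf, k):
    """Skipped read: pass every k-th sample when the buffer rate is
    k times the CODEC rate (FSb = k times FSc)."""
    return wavebuf[::k]
```

Writing n duplicates and reading every k-th address are inverse operations, so either side of the buffer can absorb the rate mismatch.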
Information about the time at which storage is made in the MIDI event time buffer TM is required for performing the time-sequential processing corresponding to the occurrence of note events. If the frame time is set to a sufficiently short value such as 5 ms or 2.5 ms, fine timing control of the various event processing operations within a frame is substantially unnecessary, so that these event processing operations need not take the time information into special consideration. However, it is preferable that the information from the breath controller and so on be handled on a last-in first-out basis, so that, for events carrying this information, processing on the last-in first-out basis is performed by use of the time information. In addition to the above-mentioned buffers, the RAM 3 may store application programs.
As described above, the inventive sound source apparatus has a software module used to compute samples of a waveform in response to sampling frequency for generating a musical tone according to performance information. In the inventive apparatus, a processor device composed of the CPU 1 periodically executes the software module SSM for successively computing samples of the waveform corresponding to a variable sampling frequency so as to generate the musical tone. A detector device included in the CPU 1 detects a load of computation imposed on the processor device during the course of generating the musical tone. A controller device implemented by the CPU 1 operates according to the detected load for changing the variable sampling frequency to adjust a rate of computation of the samples.
Preferably, the controller device provides a fast sampling frequency when the detected load is relatively light, and provides a slow sampling frequency when the detected load is relatively heavy such that the rate of the computation of the samples is reduced to 1/n, where n denotes an integer.
The processor device includes a delay device having a memory for imparting a delay to the waveform to determine a pitch of the musical tone according to the performance information. The delay device generates a write pointer for successively writing the samples into addresses of the memory and a read pointer for successively reading the samples from addresses of the memory to thereby create the delay corresponding to an address gap between the write pointer and the read pointer. The delay device is responsive to the fast sampling frequency to increment both of the write pointer and the read pointer by one address for one sample. Otherwise, the delay device is responsive to the slow sampling frequency to increment the write pointer by one address n times for one sample and to increment the read pointer by n addresses for one sample.
The processor device may include a delay device having a pair of memory regions for imparting a delay to the waveform to determine a pitch of the musical tone according to the performance information. The delay device successively writes the samples of the waveform of one musical tone into addresses of one of the memory regions, and successively reads the samples from addresses of the same memory region to thereby create the delay. The delay device is operative when said one musical tone is switched to another musical tone for successively writing the samples of the waveform of said another musical tone into addresses of the other memory region and successively reading the samples from addresses of the same memory region to thereby create the delay while clearing the one memory region to prepare for a further musical tone.
Preferably, the processor device executes the software module composed of a plurality of sub-modules for successively computing the waveform. The processor device is operative, when one of the sub-modules declines so as to become inactive without substantially affecting the other sub-modules during computation of the waveform, for skipping execution of said one sub-module.
The inventive sound source apparatus has a software module used to compute samples of a waveform for generating a musical tone. In the inventive apparatus, a provider device variably provides a trigger signal at a relatively slow rate to define a frame period between successive trigger signals, and periodically provides a sampling signal at a relatively fast rate such that a plurality of sampling signals occur within one frame period. The processor device is resettable in response to each trigger signal and is operable based on each sampling signal to periodically execute the software module for successively computing a number of samples of the waveform within one frame. The detector device detects a load of computation imposed on the processor device during the course of generating the musical tone. The controller device is operative according to the detected load for varying the frame period to adjust the number of the samples computed within one frame period. A converter device composed of CODEC 14 is responsive to each sampling signal for converting each of the samples into a corresponding analog signal to thereby generate the musical tones.
The following explains the operations of the present invention in detail with reference to flowcharts.
There are five types of triggers for commencing the task switching. If supply of a MIDI message from an application program or the like via the sound source API (MIDI API) is detected, it indicates trigger 1. In this case, the software sound source module SSM is started in step SS25 to perform MIDI processing. If an internal interrupt has been caused by a software timer (tim) that outputs the interrupt every frame period, it indicates trigger 2. In this case, the software sound source module SSM is started in step SS26 to perform waveform computation processing, thereby generating tone waveform data for the predetermined number of samples. If a transfer request for tone waveform data has been made by an output device (CODEC) based on DMA, it indicates trigger 3. In this case, transfer processing is performed in step SS27 in which the tone waveform data is transferred from the waveform output buffer WAVEBUF to the output device. If an operation event based on manual operation of the input operator device such as the mouse or the keyboard of the processing apparatus has been detected, it indicates trigger 4. In the case of an operation event for timbre setting, timbre setting processing is performed in step SS28. For other operation events, corresponding processing is performed in step SS29. If the end of the operation has been detected, it indicates trigger 5. In this case, end processing is performed in step SS30. If no trigger has been detected, trigger 4 is assumed and the processing of steps SS28 and SS29 is performed. When the processing of trigger 1 to trigger 5 has been completed, the SSM returns control to step SS21. The processing operations of steps SS21 through SS30 are repeated cyclically.
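The task-switching logic above amounts to a dispatch from a detected trigger to a processing step. The following Python sketch is illustrative only; the trigger labels and the returned step strings are hypothetical stand-ins for the five triggers and steps SS25 through SS30.

```python
def dispatch(trigger):
    """Map a detected trigger to the processing step it starts.
    Trigger labels here are illustrative, not from the embodiment."""
    table = {
        "midi":  "SS25: MIDI processing",            # trigger 1
        "timer": "SS26: waveform computation",       # trigger 2
        "dma":   "SS27: transfer WAVEBUF to CODEC",  # trigger 3
        "end":   "SS30: end processing",             # trigger 5
    }
    # Trigger 4 (operation events) is also the default assumed
    # when no other trigger has been detected.
    return table.get(trigger, "SS28/SS29: operation-event processing")
```

In an actual implementation the loop of steps SS21 through SS30 would call this dispatch repeatedly until trigger 5 is processed.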
If the MIDI event is found not a note-on event in step SS41, it is determined in step SS45 whether the MIDI event is a note-off event. If the MIDI event is found a note-off event, it is determined in step SS46 whether the sound channel (MIDI CH) assigned to the note-off event belongs to the physical model sound source. If the sound channel assigned to the note-off event is found in the physical model sound source, the key-on flag VAKEYON in the physical model sound source is set to “0” in step SS47, and the occurrence time of the note-off event is stored in the MIDI event time buffer TM, upon which control is returned. If the sound channel assigned to the note-off event is not found in the physical model sound source, the key-off processing of another sound source is performed in step SS48, upon which control is returned.
Further, if the MIDI event is found not a key-off event in step SS45, it is determined in step SS49 whether the MIDI event is a program change. If the MIDI event is found the program change, it is determined in step SS50 whether the sound channel (MIDI CH) assigned to the MIDI event of program change belongs to the physical model sound source. If the sound channel assigned to the MIDI event of program change is found in the physical model sound source, the tone control parameters VATONEPAR designated in the program change are stored in step SS51, upon which control is returned. If the sound channel assigned to the MIDI event of program change is not found in the physical model sound source, the timbre parameter processing corresponding to that sound channel is performed in step SS52, upon which control is returned. If the MIDI event is not a program change in step SS49, the processing of the corresponding MIDI event is performed in step SS53, upon which control is returned. In this MIDI event processing, the processing for a breath controller operation is performed, for example.
If the MIDI event is found not a breath control event, step SS67 is skipped, and, in step SS68, it is determined whether the MIDI event is a pitch bend event. If the MIDI event is found a pitch bend event, it is determined in step SS69 whether the embouchure mode is set. If the embouchure mode is set, the parameter PITCHBEND in the pitch bend event is stored in the embouchure buffer EMBBUF in step SS70. If the embouchure mode is not set, the parameter PITCHBEND in the pitch bend event is stored in the pitch bend buffer PBBUF in step SS72.
Further, if it is found that the sound channel does not belong to the physical model sound source in step SS65 and if the MIDI event is found not a pitch bend event in step SS68, control is passed to step SS71, in which it is assumed that the received MIDI event does not correspond to any of the above-mentioned events, then processing corresponding to the received event is performed, and control is returned. It should be noted that the embouchure signal indicates a pressure with which the player mouths the mouthpiece. Since the pitch varies based on this embouchure signal, the parameter PITCHBEND is stored in the embouchure buffer EMBBUF in the embouchure mode. As described, every time a MIDI event is received, the parameters associated with music performance are updated by the MIDI event processing.
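The routing of a pitch bend event in steps SS68 through SS72 can be sketched as below. The buffers EMBBUF and PBBUF are modeled as plain Python lists, and the function name is a hypothetical label, not from the embodiment.

```python
def store_pitch_bend(pitchbend, embouchure_mode, embbuf, pbbuf):
    """Route a PITCHBEND value per steps SS68-SS72: in the embouchure
    mode the value stands for mouth pressure and goes to the embouchure
    buffer (EMBBUF); otherwise it is an ordinary pitch bend (PBBUF)."""
    if embouchure_mode:
        embbuf.append(pitchbend)   # step SS70
    else:
        pbbuf.append(pitchbend)    # step SS72
```

The same pattern extends to the other MIDI events: each event updates the performance parameter buffer that the next waveform computation will read.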
In step SS78, the sampling frequency FS specified by the tone control parameter VATONEPAR for the selected physical model sound source is set as the operation sampling frequency SAMPFREQ. Further, in step SS79, alarm clear processing is performed. In step SS80, the tone control parameters VATONEPAR corresponding to the parameter SAMPFREQ and the parameter VAKC are read to be stored in the buffer VAPARBUF, upon which control is returned. In this case, the tone control parameters VATONEPAR may be stored in the buffer VAPARBUF in consideration of the parameter VAVEL.
If the load of the CPU 1 is found heavy in step SS76, it is determined in step SS81 whether the frame time automatic change mode is set. If this mode is set, a value obtained by multiplying the standard frame period TIMDEF by integer α is set as the period tim of the software timer in step SS82. Integer α is set to a value higher than one. When the frame period is extended, the frequency at which parameters are loaded into the physical model sound source can be lowered, thereby reducing the number of processing operations for transferring the changed data and the number of computational operations involved in the data updating.
In step SS83, the current operation sampling frequency SAMPFREQ is checked. If the operation sampling frequency SAMPFREQ is the sampling frequency FS1, it indicates that the load of the CPU 1 is heavy, so that the sampling frequency FS2 which is ½ of FS1 is set as the operation sampling frequency SAMPFREQ in step SS84. Then, the processing operations of step SS79 and subsequent steps are performed. In this case, a new tone control parameter VATONEPAR corresponding to the changed parameter SAMPFREQ is read and stored in the buffer VAPARBUF.
In step SS83, if the operation sampling frequency SAMPFREQ is found not the standard sampling frequency FS1, alarm display processing is performed in step SS85. This is because the current operation sampling frequency SAMPFREQ is already 1/n times FS1. Although the sampling frequency FS2 that should comparatively reduce the load of the CPU 1 is already set, the load of the CPU 1 has been found heavy. This may disable the normal waveform generation processing in the physical model sound source. If the physical model sound source is found sounding in step SS86, the physical model sound source is muted and the processing of step SS80 is performed.
The above-mentioned processing operations cause the tone control parameters VATONEPAR, which are necessary for the physical model sound source to generate the waveform data, to be stored in the buffer VAPARBUF. This allows the generation of waveforms by computation. In this waveform generation processing, the operation sampling frequency is dynamically changed depending on the load of the CPU 1. Flowcharts for this waveform generation processing of the physical model sound source are shown in
Then, in step SS92, the load state of the CPU 1 is checked. This check is performed by considering the occupation ratio of the waveform computation time in one frame period in the preceding frame. If this check indicates in step SS93 that the load of the CPU 1 is not heavy, the sampling frequency FS in the selected tone control parameters VATONEPAR is set as the operation sampling frequency SAMPFREQ in step SS94. If the check indicates that the load of the CPU 1 is heavy, it is determined in step SS105 whether the operation sampling frequency SAMPFREQ can be lowered. If it is found that the operation sampling frequency SAMPFREQ can be lowered, the same is actually lowered in step SS106 to 1/n, providing the sampling frequency FS2. If the sampling frequency is already FS2, and therefore the operation sampling frequency SAMPFREQ cannot be lowered any more, alarm display is performed in step SS107. This is because the operation sampling frequency SAMPFREQ is already set to 1/n times FS1. Although the sampling frequency is already set to the sampling frequency FS2 that should comparatively lower the load of the CPU 1, the actual load of the CPU 1 is found yet heavy. In this case, the necessary computation amount cannot be provided in one frame time or a predetermined time. Then, if the physical model sound source is found sounding in step SS108, the sound channel is muted, upon which control is returned.
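The load-dependent selection of the operation sampling frequency in steps SS92 through SS107 can be sketched as follows. This is a minimal Python sketch: the threshold on the occupation ratio and the function name are assumptions, not values from the embodiment.

```python
FS1 = 44_100  # standard sampling frequency (Hz)

def select_sampfreq(load_ratio, current_fs, n=2, threshold=0.8):
    """Sketch of steps SS92-SS107: lower the operation sampling frequency
    to FS1/n when the preceding frame's computation-time occupation ratio
    indicates a heavy load; signal an alarm when it cannot go lower.
    Returns (new_sampling_frequency, alarm_or_None)."""
    if load_ratio < threshold:       # load not heavy: full rate (SS94)
        return FS1, None
    if current_fs == FS1:            # heavy: drop to FS2 = FS1/n (SS106)
        return FS1 // n, None
    return current_fs, "alarm"       # already at FS2: alarm display (SS107)
```

When the alarm is raised, the embodiment further mutes the sounding channel, since the required computation cannot be completed within one frame even at the lowered rate.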
When the processing of step SS94 or step SS106 comes to an end, alarm clear processing is performed in step SS95. Then, in step SS96, it is determined whether the operation sampling frequency SAMPFREQ has been changed. If the operation sampling frequency SAMPFREQ is found changed, the parameter change processing due to the operation sampling frequency change is performed in step SS97. Namely, the tone control parameter VATONEPAR corresponding to the operation sampling frequency SAMPFREQ is read and stored in the buffer VAPARBUF. If the change processing is found not performed, step SS97 is skipped.
In step SS98, it is determined whether truncate processing is to be performed. This truncate processing is provided for monotone specifications. In the truncate processing, a tone being sounded is muted and a following tone is started. If a truncate flag VATRUNCATE is set to “1”, the decision is YES and the truncate processing is started. Namely, in step SS99, the signal P for breath pressure or bow velocity and the signal E for embouchure or bow pressure are set to “0”. In step SS100, envelope dump processing is performed. This dump processing is performed by controlling the EG PAR to be supplied to the envelope controller. In step SS101, it is determined whether the envelope dump processing has ended. If this dump processing is found ended, the delay amount set to the delay circuit in the loop is set to “0” in step SS102. This terminates the processing for muting the sounding tone.
Then, in step SS109 shown in
In step SS111, it is determined whether the waveform computation for the number of samples calculated in step SS91 has ended. If the computation is found not ended, control is passed to step SS113, in which the time occupied by computation by the CPU 1 in one frame time or a predetermined time is checked. If this check indicates that the occupation time does not exceed the one frame time, next sample computation processing is performed in step SS110. The processing operations of steps SS110, SS111, SS113, and SS114 are cyclically performed until the predetermined number of samples is obtained as long as the occupation time does not exceed the one frame time. Consequently, it is determined in step SS111 that the computation of the predetermined number of samples in one frame has ended. Then, in step SS112, the tone waveform data stored in the waveform output buffer WAVEBUF is passed to the output device (the CODEC).
If it is determined in step SS114 that one frame time has lapsed before the predetermined number of samples has been computed, then, in step SS115, the muting processing of the tone waveform data in the waveform output buffer WAVEBUF is performed. Next, in step SS112, the tone waveform data stored in the waveform output buffer WAVEBUF is passed to the output device (the CODEC). If, in step SS90, the key-on flag VAKEYON is found not set to “1”, it is determined in step SS103 whether key-off processing is under way. If the decision is YES, the key-off processing is performed in step SS104. If the key-off processing is found not under way, control is returned immediately.
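The per-frame computation loop of steps SS110 through SS115 can be sketched as below. The callbacks `compute_sample` and `time_left` are hypothetical stand-ins for the sample computation of step SS110 and the frame-time check of step SS114.

```python
def compute_frame(num_samples, compute_sample, time_left):
    """Sketch of steps SS110-SS115: compute samples one at a time, but if
    the frame-time budget runs out first, return a muted frame instead."""
    wavebuf = []
    for i in range(num_samples):
        if not time_left():                # one frame time lapsed (SS114)
            return [0.0] * num_samples     # muting processing (SS115)
        wavebuf.append(compute_sample(i))  # next sample computation (SS110)
    return wavebuf                         # then passed to the CODEC (SS112)
```

Returning a muted frame rather than a partial one avoids outputting a truncated waveform when the CPU cannot keep up.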
According to the invention, the tone generating method uses a hardware processor in the form of the CPU 1 and a software module in the form of the sound source module SSM to compute samples of a waveform in response to a sampling frequency for generating a musical tone according to performance information. The inventive method comprises the steps of periodically operating the hardware processor to execute the software module for successively computing samples of the waveform corresponding to a variable sampling frequency so as to generate the musical tone, detecting a load of computation imposed on the hardware processor during the course of generating the musical tone, and changing the variable sampling frequency according to the detected load to adjust a rate of computation of the samples. Preferably, the step of changing provides a fast sampling frequency when the detected load is relatively light, and provides a slow sampling frequency when the detected load is relatively heavy such that the rate of the computation of the samples is reduced by 1/n where n denotes an integer number.
The inventive method uses a hardware processor having a software module used to compute samples of a waveform for generating a musical tone. The inventive method comprises the steps of variably providing a trigger signal at a relatively slow rate to define a frame period between successive trigger signals, periodically providing a sampling signal at a relatively fast rate such that a plurality of sampling signals occur within one frame period, operating the hardware processor resettable in response to each trigger signal and operable in response to each sampling signal to periodically execute the software module for successively computing a number of samples of the waveform within one frame, detecting a load of computation imposed on the hardware processor during the course of generating the musical tone, varying the frame period according to the detected load to adjust the number of the samples computed within one frame period, and converting each of the samples into a corresponding analog signal in response to each sampling signal to thereby generate the musical tones.
Meanwhile, in order to build the physical model sound source in which the sampling frequency is variable, a delay device is required in which the sampling frequency is variable while a delay time can be set without restriction from the sampling frequency. The following explains such a delay device with reference to FIG. 38. In the physical model sound source, each delay circuit uses a delay area in the RAM 3 as a shift register to obtain a predetermined delay amount. A DELAY×20 shown in
In this case, a total delay amount of the delay outputs of an adder AD20 in the DELAY×20 becomes (D+d) in terms of the number of delay stages. In terms of time, the total delay amount becomes (D+d)/FS for the sampling frequency FS. If the maximum value among the sampling frequencies is FS1, then it is desirable to constitute the delay such that the periodic time of the sampling frequency FS1 basically corresponds to one stage of the delay circuit. In such a constitution, in order to lower the sampling frequency to 1/n of FS1, one sample obtained by the computation may be written to n continuous stages of the delay circuit, namely n continuous addresses, for each sample computation. On the other hand, the delay outputs may be read by updating the read pointer by n addresses. Therefore, in the above-mentioned constitution, the equivalent number of delay stages (D+d) for implementing a necessary delay time Td is (D+d)=Td times FS1 regardless of the sampling frequency. It should be noted that the write pointer and the read pointer equivalently shift in the address direction indicated by the arrow on the shift register. When the pointers reach the right end of the shift register, the pointers jump to the left end, thus circulating on the DELAY×20.
As described, since the delay time length equivalent to one stage of delay is made constant (1/FS1) regardless of the sampling frequency FS, even if the sampling frequency FS is changed to the sampling frequency FS2 which is 1/n of FS1, the write pointer is set to write one sample of the waveform data over n continuous addresses so as to maintain the delay time length of the delay output. Every time one sample of the waveform data is generated, the write pointer is advanced by n addresses in total. The read pointer is updated by n addresses at a time, skipping (n−1) addresses, to read the sample delayed by address skipping. This constitution allows the delay output for one sample of the generated waveform data to correspond to the delay output read from the address location n addresses before. Therefore, for the decimal fraction delay part shown in
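The variable-rate delay line described above can be sketched in Python as follows. This is a minimal sketch, not the embodiment's exact memory layout: one buffer slot corresponds to one period of FS1, and the class and method names are ours.

```python
class VariableRateDelay:
    """Delay line whose one stage equals one period of FS1. At FS1 (n=1)
    both pointers advance by one address per sample; at FS1/n each
    computed sample is written to n continuous addresses and the read
    pointer advances by n, so the delay time (stages/FS1) stays constant."""
    def __init__(self, stages):
        self.buf = [0.0] * stages
        self.wp = 0  # write pointer
        self.rp = 0  # read pointer (delay = full buffer length here)

    def process(self, sample, n=1):
        out = self.buf[self.rp]               # read the delayed sample
        for _ in range(n):                    # write over n continuous addresses
            self.buf[self.wp] = sample
            self.wp = (self.wp + 1) % len(self.buf)
        self.rp = (self.rp + n) % len(self.buf)  # skip (n-1) addresses
        return out
```

With a 4-stage buffer, n=1 yields a 4-sample delay at FS1, while n=2 yields a 2-sample delay at FS1/2: the same delay time in seconds.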
Also, in a unit delay means provided for a filter and so on in the physical model sound source, a means generally similar to the above-mentioned delay circuit is used to prevent the delay time length from being changed even if the preset sampling frequency is changed. The following explains this unit delay means with reference to FIG. 39. The unit delay means also uses the delay area in the RAM 3 as a shift register. A DELAY×21 shown in
As described with the delay circuit shown in
The inventive sound source apparatus has a software module used to compute samples of a waveform in response to a sampling frequency for generating a musical tone according to performance information. In the inventive apparatus, a processor device responds to a variable sampling frequency to periodically execute the software module for successively computing samples of the waveform so as to generate the musical tone. A detector device detects a load of computation imposed on the processor device during the course of generating the musical tone. A controller device operates according to the detected load for changing the variable sampling frequency to adjust a rate of computation of the samples. The controller device provides a fast sampling frequency when the detected load is relatively light, and provides a slow sampling frequency when the detected load is relatively heavy such that the rate of the computation of the samples is reduced to 1/n, where n denotes an integer. The processor device includes a delay device having a memory for imparting a delay to the waveform to determine a pitch of the musical tone according to the performance information. The delay device generates a write pointer for successively writing the samples into addresses of the memory and a read pointer for successively reading the samples from addresses of the memory to thereby create the delay corresponding to an address gap between the write pointer and the read pointer. The delay device is responsive to the fast sampling frequency to increment both of the write pointer and the read pointer by one address for one sample. Otherwise, the delay device is responsive to the slow sampling frequency to increment the write pointer by one address n times for one sample and to increment the read pointer by n addresses for one sample.
The reproduction sampling frequency of the CODEC 14 is generally fixed as described before. If the sampling frequency of the waveform data generated by computation is changed to 1/n, each sample of the generated tone waveform data is repeatedly written, n times in succession, to continuous address locations in the waveform output buffer of the RAM 3. Consequently, in the present embodiment, a series of the waveform data for one frame is written into the waveform output buffer WAVEBUF in the manner corresponding to the sampling frequency FS1. The CODEC 14 operates at the sampling frequency FS1. The CODEC 14 may receive the contents of the waveform output buffer WAVEBUF without change, and may perform DA conversion on the received contents at the sampling frequency FS1. If the reproduction sampling frequency of the CODEC 14 is synchronously varied with the sampling frequency of the waveform data to be generated, the generated waveform data may be written, sample by sample, to the waveform output buffer WAVEBUF in the RAM 3.
In the waveform generation processing shown in
Another example of the arrangement of the parameters is shown in FIG. 40B. In this example, the timbre data for every sampling frequency FS that can be set is prepared within the same tone control parameter VATONEPARi. Namely, for VATONEPAR1(FS1, FS2) through VATONEPARm(FS1, FS2), the parameters having the same timbre for each of the sampling frequencies FS1 and FS2 are all prepared in one tone control parameter VATONEPARi. In this case, the timbre parameter corresponding to the sampling frequency FS is extracted from one tone control parameter VATONEPARi, and the extracted parameter is stored in the buffer VAPARBUF. The tone control parameters having the voice numbers subsequent to VATONEPARm+1 are the tone control parameters having independent timbres corresponding to only one of the sampling frequency FS1 and the sampling frequency FS2. Namely, VATONEPARm+1(FS1, *) corresponds only to the sampling frequency FS1, and VATONEPARp(*, FS2) corresponds only to the sampling frequency FS2. In order to prevent a change of the sampling frequency from affecting the perceived identity of the tone, the parameters to be adjusted according to the changed sampling frequency include the delay parameters of the delay loop section, the filter coefficients, and the nonlinear characteristics of the nonlinear converter of the exciter.
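The per-frequency timbre tables of FIG. 40B can be sketched as a lookup keyed by sampling frequency. The dictionary layout, parameter names, and values below are placeholders of our own, illustrating only the selection mechanism.

```python
# Hypothetical layout in the style of FIG. 40B: each tone control
# parameter maps a sampling frequency to the timbre data prepared for it.
VATONEPAR = {
    1: {44100: {"delay": 100, "cutoff": 0.90},   # placeholder values
        22050: {"delay": 50,  "cutoff": 0.80}},
}

def load_vaparbuf(voice, sampfreq):
    """Extract the timbre parameters matching the operation sampling
    frequency and stage them for the sound source (buffer VAPARBUF)."""
    timbres = VATONEPAR[voice]
    if sampfreq not in timbres:
        raise KeyError("voice has no timbre for this sampling frequency")
    return dict(timbres[sampfreq])
```

Voices prepared for only one frequency, like VATONEPARm+1(FS1, *), would simply have a single entry in their table, making the KeyError path the analogue of an unsupported selection.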
In step SS123, computation of the timbre effector as shown in
When any of the above-mentioned computation skip conditions that is associated with the terminal filter FILTER-R has been satisfied, the decision is made YES in step SS132. Then, in step SS133, processing for passing the output value corresponding to the satisfied condition is performed. If the computation skip condition associated with the terminal filter FILTER-R is found not satisfied, the computation associated with the terminal filter FILTER-R is performed in step SS137. When the processing in step SS133 or SS137 has been completed, it is determined in step SS134 whether the computation skip condition associated with the multiplication coefficient TERMGR is satisfied. If this condition is found satisfied, the decision is YES. Then, in step SS135, the processing for passing the output value corresponding to the satisfied condition is performed. If the condition is found not satisfied, computation for multiplying the multiplication coefficient TERMGR in the multiplier MU8 is performed in step SS138. When the processing of step SS135 or SS138 has been completed, computation processing of the remaining delay loop portions is performed in step SS136, upon which control is returned.
Computation may be skipped not only in the delay loop but also in the exciter or the timbre effector. For the exciter, whether the computation is to be skipped is determined by checking whether the signal amplitude on the signal path and the associated parameters are nearly zero. For the timbre effector, when the output of the envelope controller EL, the resonator model section RE, or the effector EF has been sufficiently attenuated to nearly zero, the computation for each block whose output is nearly zero may be skipped and its output value set to zero. In the second embodiment described so far, the control of changing the sampling frequency FS may cause aliasing noise depending on the nonlinear conversion characteristics of the nonlinear section. This problem may be overcome by oversampling on the input side of the nonlinear conversion and by band-limiting the resulting nonlinear conversion output with a filter before returning to the original sampling frequency.
If a new key-on occurs during the current key-on state in the physical model sound source shown in
Clearing the delay area in the RAM 3 is realized by writing data “0” to that area, so that the generation of the music tone would be unnaturally delayed by the time required for the clearing.
The following explains the operation of the delay circuit shown in
The following describes in detail the operation of the delay circuits shown in
The first delay system and the second delay system can be switched in a toggle manner. Therefore, if the first delay system is in use when a new key-on occurs, for example, the multiplication coefficients of the multipliers INPUTa and OUTPUTa of the first delay system are changed from “1” to “0”. At the same time, the multiplication coefficients of the multipliers INPUTb and OUTPUTb of the second delay system are changed from “0” to “1”. These changes bring the delay means DELAYb of the second delay system into use, making it ready to generate the music tone corresponding to the new key-on. Because the multiplication coefficients of the first delay system have been changed to “0”, data “0” is written to the delay means DELAYa of the first delay system over one period of the music tone, thereby clearing this delay means.
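The toggled pair can be sketched as follows. This is an assumed implementation: the class shape and buffer handling are invented, and only the coefficient toggle (active system's coefficients 1, idle system's 0) and the self-clearing of the zero-fed line come from the text.

```python
class ToggleDelay:
    """Two delay systems switched in a toggle manner on each new key-on."""

    def __init__(self, length_a, length_b):
        self.lines = [[0.0] * length_a, [0.0] * length_b]
        self.active = 0  # index of the system whose coefficients are "1"

    def key_on(self, new_length):
        # Toggle to the other delay system and size it for the new pitch.
        self.active ^= 1
        self.lines[self.active] = [0.0] * new_length

    def process(self, x):
        total = 0.0
        for i, line in enumerate(self.lines):
            coef = 1.0 if i == self.active else 0.0
            line.append(coef * x)          # INPUT-side coefficient
            total += coef * line.pop(0)    # OUTPUT-side coefficient
        # The inactive line is fed "0" every sample, so it clears itself
        # within one pass through its length.
        return total

d = ToggleDelay(4, 4)
for _ in range(4):
    d.process(1.0)
d.key_on(2)            # new key-on: switch to the second delay system
first = d.process(1.0)  # second system starts from a cleared state
```

Because the new system's buffer starts at zero, the first outputs after a key-on are silent until the written samples reach the read point.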
The delay circuit shown in
The delay circuit shown in
In the delay circuit shown in
The above-mentioned delay circuits are implemented by software by using the delay area set in the RAM 3, as schematically illustrated in FIG. 43. As shown in the figure, a predetermined area in the RAM 3 is assigned as the delay area. This delay area is divided into a plurality of unit delay areas (DELAY1a, DELAY1b, . . . , DELAYA9, . . . , DELAYn) for constituting the delay means. These unit delay areas are allocated to the delay means (DELAY1, . . . , DELAYn). A flag area may be provided for each of these unit delay areas. A free flag set in this flag area indicates that the unit delay area is not used as a delay means and hence is free.
The following explains the allocation of the delay area for implementing the delay circuit shown in
Next, when the current key-on occurs, the unit delay area DELAY1b is allocated, for example, to the delay means of the second delay system of the first delay circuit DELAY1, and the delay amount of the unit delay area DELAY1b is set to the delay amount DLYk according to the pitch of the current key-on. Likewise, by the current key-on, the unit delay area DELAYn is allocated, for example, to the delay means of the second delay system of the nth delay circuit, and the delay amount of the unit delay area DELAYn is set to the delay amount DLYk according to the pitch associated with the current key-on. This realizes the operation of the delay circuit shown in FIG. 41.
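The free-flag allocation of unit delay areas can be sketched as below. The pool class and its interface are hypothetical; only the free flag per unit area, the per-key-on allocation, and the setting of a delay amount come from the text.

```python
class DelayAreaPool:
    """Unit delay areas in RAM, each carrying a free flag."""

    def __init__(self, names):
        # free=True means the unit delay area is not used as a delay means.
        self.areas = {name: {"free": True, "delay": 0} for name in names}

    def allocate(self, delay_amount):
        """Find a free unit delay area, set its delay amount, return its name."""
        for name, area in self.areas.items():
            if area["free"]:
                area["free"] = False
                area["delay"] = delay_amount
                return name
        raise RuntimeError("no free unit delay area")

    def release(self, name):
        # Mark the area free again once its tone has decayed.
        self.areas[name]["free"] = True

pool = DelayAreaPool(["DELAY1a", "DELAY1b"])
a = pool.allocate(120)  # previous key-on takes the first free unit area
b = pool.allocate(95)   # current key-on takes the other unit area
```

Releasing an area simply resets its flag, so a further key-on can reuse it without moving any data.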
The constitution shown in
As described above, the inventive tone generating method uses a hardware processor having a software module used to compute samples of a waveform for generating a musical tone. The inventive method comprises the steps of: periodically providing a trigger signal at a relatively slow rate to define a frame period between successive trigger signals; periodically providing a sampling signal at a relatively fast rate such that a plurality of sampling signals occur within one frame period; operating the hardware processor, which is resettable in response to a trigger signal and operable in response to each sampling signal, to periodically execute the software module for successively computing a number of samples of the waveform within one frame; and converting each of the samples into a corresponding analog signal in response to each sampling signal to thereby generate the musical tone. The step of operating includes a delaying step using a pair of memory regions for imparting a delay to the waveform to determine a pitch of the musical tone according to the performance information. The delaying step successively writes the samples of the waveform of one musical tone into addresses of one of the memory regions, and successively reads the samples from addresses of the same memory region to thereby create the delay. When the hardware processor is reset so that said one musical tone is switched to another musical tone, the delaying step responds by successively writing the samples of the waveform of said another musical tone into addresses of the other memory region and successively reading the samples from addresses of the same memory region to thereby create the delay, while clearing the one memory region to prepare for a further musical tone.
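The frame/sample timing described above can be sketched as two nested loops. This is a minimal sketch under stated assumptions: the frame ratio, the module interface, and the ramp stand-in for the waveform computation are invented; only the slow trigger delimiting frames and the fast per-sample execution come from the text.

```python
SAMPLES_PER_FRAME = 4  # assumed ratio of sampling rate to trigger rate

def run_frames(module, n_frames, state=None):
    """Execute `module` once per sampling signal, grouped into frames.
    A reset of `state` in response to performance information would be
    handled at the frame (trigger) boundary."""
    output = []
    for _frame in range(n_frames):          # one iteration per trigger signal
        for _ in range(SAMPLES_PER_FRAME):  # one iteration per sampling signal
            sample, state = module(state)
            output.append(sample)           # each sample then goes to the DAC
    return output

# Example module: a trivial ramp generator standing in for the waveform
# computation; it returns (sample, next_state).
ramp = lambda s: ((s or 0) + 1, (s or 0) + 1)
samples = run_frames(ramp, 2)
```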
Described so far is the software sound source that practices the second preferred embodiment of the invention on a personal computer. In the computer system, this sound source software can be handled as either application software or device driver software, for example. How the sound source software is to be handled may be appropriately determined according to the system configuration or the operating system (OS) used.
The sound source software or its capabilities may be incorporated in another software program such as amusement software, karaoke software, or automatic play and accompaniment software. This software may also be incorporated directly in the operating system (OS). The software according to the present invention can be supplied on machine-readable disk media such as a floppy disk, a magneto-optical disk, and a CD-ROM, or on a memory card. Further, the software may be added by means of a semiconductor memory chip (typically a ROM) inserted in a computer unit. Alternatively, the sound source software associated with the present invention may be distributed through the network I/F 11.
The above description has used an application on a personal computer as an example. Application to amusement equipment such as game machines and karaoke machines, to electronic equipment, and to general-purpose electrical equipment is also practical, as is application to a sound source board or a sound source unit. Moreover, application to a sound source machine based on software processing using a dedicated MPU (DSP) is practical. In this case, if the processing capacity of the MPU is high, the sampling frequency can be raised, for example multiplied by n, when high-precision waveform output is required. Further, when a plurality of sound channels are used on the sound source, variable control of the sampling frequency and skip control of the computation portions that can be skipped in the computation algorithm may be performed according to the number of channels being sounded. In this case, different sampling frequencies may be set for different performance parts or MIDI channels. Still further, in the above-mentioned embodiment, the sampling frequency of the CODEC is fixed; it will be apparent that this sampling frequency may be made variable. The sampling frequency is made variable by inserting a processing circuit for matching the sampling frequencies between the waveform output buffer WAVEBUF and the CODEC (DAC), typically by oversampling, downsampling, or data interpolation.
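The sampling-frequency matching between WAVEBUF and the CODEC could, for instance, use linear interpolation, one of the options named above. The function name and interface are assumptions; a production converter would use a higher-order interpolator.

```python
def resample_linear(wavebuf, src_fs, dst_fs):
    """Convert samples computed at src_fs into samples for a CODEC running
    at dst_fs, using linear interpolation between neighboring samples."""
    if not wavebuf:
        return []
    n_out = int(len(wavebuf) * dst_fs / src_fs)
    out = []
    for i in range(n_out):
        pos = i * src_fs / dst_fs           # fractional position in wavebuf
        i0 = int(pos)
        i1 = min(i0 + 1, len(wavebuf) - 1)  # clamp at the buffer end
        frac = pos - i0
        out.append((1.0 - frac) * wavebuf[i0] + frac * wavebuf[i1])
    return out

# Doubling the rate of a short ramp: every other output sample is interpolated.
doubled = resample_linear([0.0, 1.0, 2.0, 3.0], 22050, 44100)
```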
The present invention is applicable to a software sound source in which the CPU operates in synchronization with the sampling frequency to periodically execute the software module for successively computing waveform samples. For example, the CPU takes an interrupt for computing one sample at a period of 1/(n×fs), where n denotes the number of tones and fs denotes the sampling frequency. Further, the invention is applicable to a hardware sound source using an LSI chip, in order to reduce the load on the ALU and to use the resources of the LSI chip for tasks other than tone generation.
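The interrupt period 1/(n×fs) quoted above can be worked through numerically; the helper function below is purely illustrative.

```python
def sample_interrupt_period(n_tones, fs):
    """Period in seconds between per-sample interrupts when n_tones tones
    are computed one sample at a time on one CPU."""
    return 1.0 / (n_tones * fs)

# Four tones at 44.1 kHz: an interrupt roughly every 5.7 microseconds.
period = sample_interrupt_period(4, 44100)
```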
As described above, according to the present invention, music tone waveform generating blocks indicated by a preset algorithm are assigned to selected sound channels, the assigned music tone waveform generating blocks are combined by the algorithm, and music tone waveform generating computation is performed to generate music tone waveform data. Consequently, the number of music tone waveform generating blocks for the sound channels may be arbitrarily changed before sounding assignment is made. This novel constitution allows, according to the capacity of a music tone waveform data generating means, flexible adjustment of the load state of the music tone waveform data generating means and of the quality of the music tone waveform data to be generated.
The music tone waveform generating blocks indicated by an algorithm set according to the timbre of the music tone are assigned to the selected sound channels. The assigned music tone waveform generating blocks are combined by the algorithm to perform music tone waveform generating computation so as to generate the music tone waveform data.
Preferably, in setting timbres by a timbre setting means, if the number of music tone waveform generating blocks for a performance part concerned is set by a means for setting the number of blocks, the timbre set for that performance part is changed to a timbre defined by music tone waveform generating blocks within that number of blocks. This novel constitution further enhances the above-mentioned effect.
Preferably, during the music tone waveform generating computation in a sound channel, the number of music tone waveform generating blocks assigned to that sound channel is changed according to a predetermined condition. Consequently, during sounding, the load state of the music tone waveform data generating means and the quality of the music tone waveform data to be generated may be changed flexibly according to the capacity of that music tone waveform data generating means.
Further, according to the present invention, in a computer equipment which often executes a plurality of tasks such as word processing and network communication in addition to music performance, occurrence of troubles such as an interrupted music tone can be reduced when the CPU power is allocated to the tasks not associated with music performance during processing of the software sound source. In other words, more tasks can be undertaken during the execution of sound source processing.
Since the present invention is constituted as described above, when the CPU load is high, the sampling frequency can be lowered, thereby generating tone waveform data that prevents the interruption of a music tone. When the CPU load is low, a higher sampling frequency than the normal sampling frequency can be used, thereby generating high-precision tone waveform data. In this case, the number of sound channels may be changed instead of changing the sampling frequency.
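The load-adaptive control described above might be sketched as follows. The thresholds and the particular frequencies are assumptions; the text states only that a high CPU load lowers the sampling frequency and a low load permits raising it (or, alternatively, that the number of sound channels is changed).

```python
def choose_sampling_frequency(cpu_load, normal_fs=44100):
    """Pick a sampling frequency from the current CPU load (0.0 to 1.0).
    Thresholds below are illustrative, not from the embodiment."""
    if cpu_load > 0.8:        # heavy load: halve fs to avoid interruption
        return normal_fs // 2
    if cpu_load < 0.3:        # light load: double fs for high precision
        return normal_fs * 2
    return normal_fs          # otherwise keep the normal sampling frequency
```

The same decision structure could return a channel count instead of a frequency, matching the alternative mentioned in the text.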
If a particular condition is satisfied, corresponding computational operations are skipped, so that efficient computation can be performed, thereby preventing the CPU load from getting extremely high. Consequently, the tone waveform data can be generated that prevents the sounding of a music tone from being interrupted. Further, the efficient computation allows the use of the higher sampling frequency than the conventional sampling frequency, resulting in high-precision tone waveform data.
While the preferred embodiments of the present invention have been described using specific terms, such description is for illustrative purposes only, and it is to be understood that changes and variations may be made without departing from the spirit or scope of the appended claims.
Foreign application priority data:

Number | Date | Country | Kind
---|---|---|---
8-246942 | Aug 1996 | JP | national
8-248592 | Aug 1996 | JP | national
9-017333 | Jan 1997 | JP | national
U.S. patent documents cited:

Number | Name | Date | Kind
---|---|---|---
4554857 | Nishimoto | Nov 1985 | A
5040448 | Matsubara et al. | Aug 1991 | A
5200564 | Usami et al. | Apr 1993 | A
5220117 | Yamada et al. | Jun 1993 | A
5376752 | Limberis et al. | Dec 1994 | A
5410099 | Kosugi | Apr 1995 | A
5432293 | Nonaka et al. | Jul 1995 | A
5677504 | Kurata | Oct 1997 | A
5696342 | Shimizu | Dec 1997 | A
5698806 | Yamada et al. | Dec 1997 | A
5714703 | Wachi et al. | Feb 1998 | A
5767430 | Yamanoue et al. | Jun 1998 | A
Foreign patent documents cited:

Number | Date | Country
---|---|---
0 337 458 | Oct 1989 | EP
0 747 877 | Dec 1996 | EP
2179697 | Jul 1990 | JP
05-249970 | Sep 1993 | JP
05-297876 | Nov 1993 | JP
06-097770 | Apr 1994 | JP
07-271378 | Oct 1995 | JP
7-122796 | Dec 1995 | JP
07-319471 | Dec 1995 | JP
9618995 | Jun 1996 | WO
Related U.S. application data:

Relation | Number | Date | Country
---|---|---|---
Parent | 08920947 | Aug 1997 | US
Child | 09976769 | | US