1. Technical Field of the Invention
The present invention relates to a technology for synthesizing a sound.
2. Description of the Related Art
A technology has been proposed for synthesizing a desired sound using sound data representing features of sounds that were previously recorded. For example, Patent Reference 1 or Patent Reference 2 describes a technology in which a frequency spectrum specified from sound data is expanded or contracted along the frequency axis according to a desired pitch, and an envelope of the expanded or contracted frequency spectrum is adjusted to synthesize a desired sound.
However, the technology of Patent Reference 1 or Patent Reference 2 synthesizes a sound that would be received at a sound collecting point (i.e., at the mounting position of a sound collecting device) where sounds used to generate the sound data were recorded. Thus, the technology cannot synthesize a sound that would be heard at a position which the user designates inside a space in which sounds were recorded.
The invention has been made in view of these circumstances, and it is an object of the invention to generate a sound that would be heard at a position desired by the user inside a space in which sounds used to generate sound data were recorded.
In order to achieve the above object, a sound synthesizer according to the invention includes a storage that stores a plurality of sound data respectively representing a plurality of sounds collected at different sound collecting points corresponding to the plurality of the sound data, a setting unit that variably sets a position of a sound receiving point according to an instruction from a user, and a sound synthesis unit that synthesizes a sound by processing each of the plurality of the sound data according to a relation between a position of the sound collecting point corresponding to the sound data (for example, a corresponding one of the positions P[1] to P[N] described later) and the position of the sound receiving point (for example, the position PU described later).
According to this configuration, it is possible to generate a sound that would be heard at a position (i.e., a virtual sound receiving point) desired by the user inside an environment in which sounds used to generate sound data were recorded, since a sound is synthesized by processing each of the plurality of the sound data according to a relation between the position of the sound collecting point corresponding to the sound data and the position of the sound receiving point indicated by the user.
In a preferable embodiment of the invention, the sound synthesis unit synthesizes a sound by processing each of the plurality of the sound data according to a distance (for example, a corresponding one of the distances L[1] to L[N] described later) between the position of the sound collecting point corresponding to the sound data and the position of the sound receiving point.
In a preferable embodiment of the invention, the setting unit variably sets a directionality attribute (for example, a directionality mode tU or a sound receiving direction dU) of the sound receiving point according to an instruction from a user, and the sound synthesis unit synthesizes a sound by processing each of the plurality of the sound data according to the sensitivity that the directionality attribute defines for the direction of the sound collecting point corresponding to the sound data as viewed from the sound receiving point.
According to this embodiment, it is possible to synthesize a sound closer to the sounds actually heard inside the environment in which the sounds used to generate the sound data were recorded, since changes of sounds according to the direction of the sound receiving point from each sound collecting point are reflected in the synthesized sound. In this embodiment, for example, the setting unit sets at least one of a sound receiving direction (for example, the sound receiving direction dU described later) and a directionality type (for example, the directionality mode tU described later) as the directionality attribute of the sound receiving point according to an instruction from the user.
In a preferable embodiment of the invention, the sound synthesis unit weights an envelope of a frequency spectrum of a sound represented by each of the plurality of the sound data by a factor (for example, a corresponding one of the weights W[1] to W[N] described later) corresponding to the relation between the position of the sound collecting point corresponding to the sound data and the position of the sound receiving point, and synthesizes a sound using an envelope obtained by summing the weighted envelopes.
In this embodiment, the relation between the position of each sound collecting point and the position of the sound receiving point is reflected in the envelope of the synthesized sound. However, the synthesis method that the sound synthesis unit uses to synthesize a sound and the details of the processing performed on the sound data may be freely chosen in the invention.
The sound synthesizer according to each of the above embodiments may not only be implemented by hardware (electronic circuitry) such as a Digital Signal Processor (DSP) dedicated to musical sound synthesis but may also be implemented through cooperation of a general arithmetic processing unit such as a Central Processing Unit (CPU) with a program. A program according to the invention causes a computer, including a storage that stores a plurality of sound data respectively representing a plurality of sounds collected at different sound collecting points corresponding to the plurality of the sound data, to perform a setting process to variably set a position of a sound receiving point according to an instruction from a user, and a sound synthesis process to synthesize a sound by processing each of the plurality of the sound data according to a relation between a position of the sound collecting point corresponding to the sound data and the position of the sound receiving point. The program achieves the same operations and advantages as those of the sound synthesizer according to each of the above embodiments. The program of the invention may be provided to a user through a machine-readable recording medium storing the program and then be installed on a computer, and may also be provided from a server device to a user through distribution over a communication network and then be installed on a computer.
The control device 10 is an arithmetic processing unit that executes a program stored in the storage device 12. The control device 10 of this embodiment functions as a plurality of elements such as an information generation unit 32, a display controller 34, a sound synthesis unit 42, and a setting unit 44 for generating a sound signal SOUT representing the waveform of a sound such as a sound of singing. The plurality of elements that the control device 10 implements may each be mounted in a distributed manner on a plurality of devices such as integrated circuits or may each be implemented by an electronic circuit such as a DSP dedicated to generating the sound signal SOUT.
The storage device 12 stores a program that is executed by the control device 10 and a variety of data that is used by the control device 10. Any known recording medium such as a semiconductor storage device or a magnetic storage device may be used as the storage device 12. The storage device 12 of this embodiment stores a sound data group G including N sound data D (or N pieces of sound data D) (D[1], D[2], . . . , D[N]), where N is a natural number. The sound data D represents features of a sound that has been previously collected and stored. More specifically, the sound data D includes a plurality of sound element data DS (or a plurality of pieces of sound element data DS), each corresponding to an individual sound element. Each sound element data DS includes a frequency spectrum S of a sound element and an envelope E of the frequency spectrum S. The sound element is a phoneme, which is the smallest unit that can be aurally distinguished, or a phoneme chain, which is a series of connected phonemes.
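As an illustration of this data layout, the following Python sketch models the sound data group G; the class names, field names, and use of NumPy arrays are assumptions made here for clarity, not details taken from the embodiment.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class SoundElementData:
    """Sound element data DS: one phoneme or phoneme chain."""
    element: str          # e.g. a phoneme such as "a"
    spectrum: np.ndarray  # frequency spectrum S (magnitude per frequency bin)
    envelope: np.ndarray  # envelope E of the frequency spectrum S


@dataclass
class SoundData:
    """Sound data D[i]: features of the sound collected at position P[i]."""
    position: np.ndarray              # position P[i] of sound collecting device M[i]
    elements: List[SoundElementData]  # one entry per sound element


# The sound data group G held in the storage device 12: N pieces of sound data D.
sound_data_group: List[SoundData] = []
```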
A sound collected by a sound collecting device M[i] disposed at a position P[i] (i = 1 to N) in the space R is used to generate sound data D[i].
The input device 22 is equipment that the user operates to input instructions to the sound synthesizer.
The information generation unit 32 in the control device 10 generates or edits music information QA such as score data, which is used to synthesize a sound, according to an operation that the user performs on the input device 22 and then stores the music information QA in the storage device 12.
The display controller 34 causes a display device to display images that assist the user's operations, such as the music editing image and the sound receiving setting image 60 described below.
When the user performs an operation for starting editing of the music information QA on the input device 22, the display controller 34 displays a music editing image on the display device. The music editing image includes a work area 52 in which an indicator CA is displayed for each sound (designated sound) that the user designates.
Each time the user selects a designated sound, the information generation unit 32 stores, in the storage device 12, the pitch and the sound generation time indicated by the user as the pitch and the sound generation time of that designated sound in the music information QA. The user designates a lyric character of each indicator CA (i.e., each designated sound) in the work area 52 by appropriately operating the input device 22. The information generation unit 32 stores a sound element corresponding to the character, which the user has designated for the designated sound, in the music information QA in association with the designated sound.
The sound synthesis unit 42 generates a sound signal SOUT of the series of designated sounds specified in the music information QA, as described in detail below.
When the user performs an operation for starting generation or editing of the sound receiving information QB on the input device 22, the display controller 34 displays the sound receiving setting image 60 on the display device. The sound receiving setting image 60 includes a work area 62.
The work area 62 is a region having a shape corresponding to the space R in which the sounds used to generate the sound data group G were collected. The user designates the position PU of the sound receiving point U (i.e., the position of a virtual sound receiving device) inside the work area 62 by operating the input device 22.
The user variably designates the directionality mode tU at the sound receiving point U (i.e., a directionality attribute of the virtual sound receiving device disposed at the position PU) through operation of the input device 22. For example, the display controller 34 displays, in the work area 62, a directionality pattern CB representing the designated directionality mode tU.
In addition, the user also variably designates the sound receiving sensitivity hU at the sound receiving point U (i.e., the gain of the virtual sound receiving device disposed at the position PU) and the sound receiving direction dU at the sound receiving point U (i.e., a directionality attribute of the virtual sound receiving device disposed at the position PU) through operation of the input device 22. The display controller 34 rotates the directionality pattern CB to the sound receiving direction dU designated by the user.
Each time the user operates an operator (Add) 642 in the sound receiving setting image 60, the setting unit 44 adds a new piece of sound receiving information QB, identified by an identifier displayed in a region 641, to the storage device 12.
When an operator (Delete) 643 is operated, the setting unit 44 deletes the sound receiving information QB corresponding to the identifier in the region 641 from the storage device 12. When an operator (Play) 644 is operated, the sound synthesis unit 42 synthesizes a sound signal SOUT of a predetermined sound element using the sound receiving information QB that is being edited. The user can generate desired sound receiving information QB by editing the sound receiving information QB while listening, as needed, to the synthesized sound reproduced through the sound output device 26. When an operator (OK) 645 is operated, the sound receiving setting image 60 is removed after the sound receiving information QB that is being edited is fixed. When an operator (Cancel) 646 is operated, the sound receiving setting image 60 is removed without reflecting, in the sound receiving information QB, any setting performed after the immediately previous operation of the operator 642.
The sound synthesis unit 42 generates the sound signal SOUT using a frequency spectrum SA and an envelope EA that an adjustment unit 46 generates from the sound data group G, in the following manner.
The sound synthesis unit 42 sequentially performs a pitch conversion process and a magnitude adjustment process. The pitch conversion process is a process for expanding or contracting the frequency spectrum SA in the direction of the frequency axis. That is, the sound synthesis unit 42 calculates a conversion rate k by dividing a pitch PX that is designated for the selected designated sound in the music information QA by the fundamental frequency P0 of the frequency spectrum SA (i.e., k = PX/P0) and expands (when the conversion rate k is greater than “1”) or contracts (when the conversion rate k is less than “1”) the frequency spectrum SA in the direction of the frequency axis by a ratio corresponding to the conversion rate k to generate a frequency spectrum SB.
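The pitch conversion process can be illustrated with a short sketch. The following Python function resamples a magnitude spectrum along the frequency axis by the conversion rate k = PX/P0; the function name and the use of linear interpolation over frequency bins are assumptions, since the embodiment does not specify how the expansion or contraction is computed.

```python
import numpy as np


def pitch_convert(spectrum_sa: np.ndarray, p0: float, px: float) -> np.ndarray:
    """Expand or contract the frequency spectrum SA along the frequency axis
    by the conversion rate k = PX / P0, producing the frequency spectrum SB."""
    k = px / p0                      # k > 1 expands, k < 1 contracts
    bins = np.arange(len(spectrum_sa))
    # The value of SB at frequency f is taken from SA at frequency f / k,
    # so spectral features originally at f appear at k * f after conversion.
    return np.interp(bins / k, bins, spectrum_sa, left=0.0, right=0.0)
```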
The magnitude adjustment process is a process for adjusting the magnitude (i.e., amplitude) of the frequency spectrum SB that has been expanded or contracted to generate a frequency spectrum SC. The magnitude adjustment process uses the envelope EA generated by the adjustment unit 46. More specifically, the sound synthesis unit 42 generates the frequency spectrum SC by increasing or decreasing the magnitude of each local peak distribution A of the frequency spectrum SB such that a curve connecting each local peak pk of the frequency spectrum SB matches the envelope EA.
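A minimal sketch of the magnitude adjustment process follows. It detects the local peaks pk of the spectrum SB and scales the bins around each peak so the peak lands on the envelope EA; assigning every bin the gain of its nearest peak is one simple way to move each local peak distribution as a whole, and is an assumption of this sketch rather than the method of the embodiment.

```python
import numpy as np
from scipy.signal import find_peaks


def adjust_magnitude(spectrum_sb: np.ndarray, envelope_ea: np.ndarray) -> np.ndarray:
    """Scale the local peak distributions of SB so that the curve connecting
    its local peaks pk matches the envelope EA, producing the spectrum SC."""
    peaks, _ = find_peaks(spectrum_sb)
    if len(peaks) == 0:
        return spectrum_sb.copy()
    # Gain that moves each local peak pk onto the envelope EA.
    gains = envelope_ea[peaks] / np.maximum(spectrum_sb[peaks], 1e-12)
    # Give every bin the gain of its nearest peak, so the distribution
    # around each peak is raised or lowered as a whole.
    nearest = np.abs(np.arange(len(spectrum_sb))[:, None] - peaks[None, :]).argmin(axis=1)
    return spectrum_sb * gains[nearest]
```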
The following is a detailed description of how the adjustment unit 46 calculates the envelope EA and the frequency spectrum SA. The adjustment unit 46 calculates, as the envelope EA, a weighted sum of the envelopes E[1] to E[N] represented by the N sound element data DS[1] to DS[N] corresponding to the sound element of the selected designated sound in the sound data group G. More specifically, a magnitude VE(f) at each frequency f in the envelope EA is defined as the sum (i.e., a weighted sum) of the magnitudes vE_i(f) at the frequency f of the envelopes E[i] multiplied by weights W[i] for the N envelopes E[1] to E[N] (i.e., for all i from 1 to N) as represented in the following Equation (1). That is, the adjustment unit 46 generates the envelope EA corresponding to the envelopes E[1] to E[N] by performing calculation of the following Equation (1).
VE(f) = W[1]·vE_1(f) + W[2]·vE_2(f) + . . . + W[N]·vE_N(f)   (1)
Similarly, the adjustment unit 46 calculates, as the frequency spectrum SA, a weighted sum of the frequency spectrums S[1] to S[N] represented by the N sound element data DS[1] to DS[N] corresponding to the sound element of the selected designated sound in the sound data group G. More specifically, a magnitude VS(f) at each frequency f in the frequency spectrum SA is defined as the sum (i.e., a weighted sum) of the magnitudes vS_i(f) at the frequency f of the frequency spectrums S[i] multiplied by weights W[i] for the N frequency spectrums S[1] to S[N] (i.e., for all i from 1 to N) as represented in the following Equation (2). That is, the adjustment unit 46 generates the frequency spectrum SA corresponding to the frequency spectrums S[1] to S[N] by performing calculation of the following Equation (2).
VS(f) = W[1]·vS_1(f) + W[2]·vS_2(f) + . . . + W[N]·vS_N(f)   (2)
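Both Equation (1) and Equation (2) are plain weighted sums over the N curves, so a single helper covers them in code; the variable names below are assumptions.

```python
import numpy as np


def weighted_sum(curves: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Equations (1) and (2): the magnitude at each frequency f is the sum
    over i of W[i] times the i-th curve's magnitude at f. `curves` stacks
    the N envelopes E[1..N] (or the N frequency spectrums S[1..N]) as rows."""
    return weights @ curves  # (N,) @ (N, bins) -> (bins,)


# envelope_ea = weighted_sum(np.stack(envelopes), w)    # Equation (1)
# spectrum_sa = weighted_sum(np.stack(spectrums), w)    # Equation (2)
```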
The weight W[i] applied to both the magnitude vE_i(f) of the envelope E[i] in Equation (1) and the magnitude vS_i(f) of the frequency spectrum S[i] in Equation (2) is determined according to the sound receiving information QB set by the setting unit 44 and the position P[i] designated in the sound data D[i] (i.e., the position of the sound collecting device M[i] at which the sound was recorded). More specifically, the weight W[i] is determined to be the product of a factor α[i] and a factor β[i] (W[i]=α[i]·β[i]). The factor α[i] is calculated according to the distance between the position P[i] and the position PU of the virtual sound receiving point U. The factor β[i] is calculated according to the direction of the position P[i] from the position PU and the directionality attributes of sound reception at the sound receiving point U such as the directionality mode tU, the sound receiving sensitivity hU, and the sound receiving direction dU. The adjustment unit 46 calculates the factor α[i] and the factor β[i] in the following manner.
First, a description is given of the calculation of the factor α[i]. The adjustment unit 46 calculates a distance L[i] between the position PU of the sound receiving point U and the position P[i] designated in the sound data D[i], and then calculates the factor α[i] from the distance L[i] according to Equation (3), which defines the factor α[i] so that it decreases as the distance L[i] increases.
As can be understood from Equation (3), the factor α[i] increases as the position PU of the sound receiving point U and the position P[i] of the sound collecting device M[i] at which the sound was recorded get closer to each other (i.e., as the distance L[i] decreases). Accordingly, the influence of the sound element data DS[i] of the sound data D[i] (i.e., the influence of the envelope E[i] and the frequency spectrum S[i]) upon the envelope EA or the frequency spectrum SA generated by the adjustment unit 46 increases as the position P[i] at which the sound data D[i] is recorded gets closer to the sound receiving point U (i.e., the position PU) designated by the user.
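Since Equation (3) itself is not reproduced in this text, the sketch below assumes a normalized inverse-distance rule purely for illustration; any formula with the stated property (α[i] grows as L[i] shrinks) could be substituted.

```python
import numpy as np


def alpha_factors(positions_p: np.ndarray, position_pu: np.ndarray) -> np.ndarray:
    """Factors alpha[1..N] from the distances L[i] between each collecting
    position P[i] (the rows of `positions_p`) and the receiving position PU.
    Equation (3) is not reproduced here; a normalized inverse-distance rule
    is assumed, matching only the property that alpha[i] increases as the
    distance L[i] decreases."""
    distances = np.linalg.norm(positions_p - position_pu, axis=1)  # L[i]
    inv = 1.0 / np.maximum(distances, 1e-9)  # guard against L[i] == 0
    return inv / inv.sum()
```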
Next, a description is given of the calculation of the factor β[i]. The adjustment unit 46 first determines an angle θ[i] that the direction of the position P[i], as viewed from the position PU of the sound receiving point U, forms with the sound receiving direction dU.
The adjustment unit 46 then calculates a sensitivity r[i] of a sound wave that arrives at the sound receiving point U at the angle θ[i] using a sensitivity function corresponding to the directionality mode tU designated in the sound receiving information QB. The sensitivity function defines the sensitivity of a sound wave arriving at the sound receiving point U in each direction. For example, a sensitivity function of Equation (4A) is used when unidirectionality (i.e., cardioid) has been designated as the directionality mode tU, a sensitivity function of Equation (4B) is used when omnidirectionality has been designated as the directionality mode tU, and a sensitivity function of Equation (4C) is used when bidirectionality has been designated as the directionality mode tU.
r[i] = ½·cos θ[i] + ½   (4A)
r[i] = 1   (4B)
r[i] = cos θ[i]   (4C)
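The three sensitivity functions translate directly into code; the mode strings below are assumptions standing in for the values of the directionality mode tU.

```python
import numpy as np


def sensitivity(theta: np.ndarray, mode: str) -> np.ndarray:
    """Sensitivity r[i] of a sound wave arriving at angle theta[i] (radians),
    per Equations (4A) to (4C)."""
    if mode == "unidirectional":    # cardioid, Equation (4A)
        return 0.5 * np.cos(theta) + 0.5
    if mode == "omnidirectional":   # Equation (4B)
        return np.ones_like(theta)
    if mode == "bidirectional":     # Equation (4C)
        return np.cos(theta)
    raise ValueError(f"unknown directionality mode: {mode}")
```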
The adjustment unit 46 calculates, as the factor β[i], the product of the sound receiving sensitivity hU designated in the sound receiving information QB and the ratio of the sensitivity r[i] to the total sum of the sensitivities r[1] to r[N] calculated respectively for the N positions P[1] to P[N], as defined by the following Equation (5).

β[i] = hU·r[i]/(r[1] + r[2] + . . . + r[N])   (5)
The factor β[i] increases as the sensitivity r[i] increases, as can be understood from Equation (5). Accordingly, the influence of the sound element data DS[i] of the sound data D[i] (i.e., the influence of the envelope E[i] and the frequency spectrum S[i]) upon the envelope EA or the frequency spectrum SA generated by the adjustment unit 46 increases as the sound receiving point U (i.e., the position PU), for which the user has designated the directionality mode tU and the sound receiving direction dU, becomes more sensitive in the direction of the position P[i] at which the sound data D[i] was collected.
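Putting Equation (5) together with the product W[i] = α[i]·β[i] described above gives a short sketch; the sensitivities r[1] to r[N] would come from the sensitivity function for the designated directionality mode tU.

```python
import numpy as np


def beta_factors(r: np.ndarray, hu: float) -> np.ndarray:
    """Equation (5): beta[i] = hU * r[i] / (r[1] + r[2] + ... + r[N])."""
    return hu * r / r.sum()


def weights_w(alpha: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """W[i] = alpha[i] * beta[i], the weights applied in Equations (1) and (2)."""
    return alpha * beta
```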
As described above, in this embodiment, the envelope E[i] or the frequency spectrum S[i] specified by the sound element data DS[i] is used to generate the envelope EA or the frequency spectrum SA after the envelope E[i] or the frequency spectrum S[i] is weighted according to relations (such as the distance L[i] and the angle θ[i]) between the position P[i] of the sound collecting point (i.e., the sound collecting device M[i]) in the space R and the position PU designated by the user. Accordingly, it is possible to synthesize a sound that would be received by a virtual sound receiving point U assuming that the virtual sound receiving point U was disposed at the position PU in the space R. In addition, since sound receiving attributes at the sound receiving point U such as the directionality mode tU, the sound receiving sensitivity hU, and the sound receiving direction dU are variably set according to an instruction from the user, this embodiment has an advantage in that it is possible to synthesize a sound that would be received by a sound receiving device having characteristics desired by the user when the sound receiving device is virtually disposed in the space R.
The following is a description of the second embodiment of the invention. In each of the following embodiments, the same elements as those of the first embodiment are denoted by the same reference numerals and a detailed description thereof is appropriately omitted.
In the second embodiment, the setting unit 44 sets K sound receiving points U inside the space R according to instructions from the user. For each of the K sound receiving points U, the adjustment unit 46 generates an envelope EA and a frequency spectrum SA according to the variables corresponding to the sound receiving point U in the sound receiving information QB using the same method as that of the first embodiment. For each of the K sound receiving points U, the sound synthesis unit 42 generates a sound signal SOUT according to the envelope EA and the frequency spectrum SA that the adjustment unit 46 has calculated for the sound receiving point U, using the same method as that of the first embodiment. The K sound signals SOUT generated in this manner are mixed together by the sound synthesis unit 42 and then output to the sound output device 26. In addition to the same advantages as those of the first embodiment, this embodiment has an advantage in that it is possible to synthesize sounds that would be received at a plurality of sound receiving points U in the space R.
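The per-point synthesis and the final mix can be summarized as follows; the embodiment only states that the K sound signals SOUT are mixed together, so the equal-gain summation here is an assumption.

```python
import numpy as np


def mix_signals(signals_sout: list) -> np.ndarray:
    """Mix the K sound signals SOUT (one per sound receiving point U) into
    the single signal passed to the sound output device 26. Equal-gain
    summation is assumed; mixing gains are not specified in the text."""
    return np.sum(signals_sout, axis=0)
```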
Various modifications can be made to each of the above embodiments. The following are specific examples of such modifications. It is also possible to select and combine, as appropriate, two or more of the above embodiments or the following modifications.
(1) Modification 1
Although each of the above embodiments has been exemplified by the case where a plurality of persons u generate vocal sounds in the space R when a sound data group G is generated (i.e., the case where a sound data group G of choral sounds is generated), it is also preferable to employ a configuration wherein a sound data group G is generated from a (solo) vocal sound generated by one person u. Although a human vocal sound is collected to generate sound data D (sound data D0 in the third embodiment) in each of the above embodiments, it is also possible to employ a configuration wherein the sound data D (D0) represents a sound played by an instrument.
(2) Modification 2
Although each of the above embodiments has been exemplified by the case where the sound collecting points (the sound collecting devices M[i]) are disposed in a plane (i.e., in two dimensions) in the space R, each of the above embodiments applies in the same manner to the case where the sound collecting points are disposed in three dimensions in the space R. In the case where the sound collecting points are disposed in three dimensions, each position P[i] is defined by three-dimensional coordinates in an x-y-z space R.
(3) Modification 3
The sound synthesis unit 42 may use any known technology to synthesize a sound. A method for reflecting the sound receiving information QB in the synthesized sound is appropriately selected according to the synthesis method used by the sound synthesis unit 42 (specifically, according to the variables used for synthesis). In addition, although the sound receiving information QB (specifically, the weights W[1] to W[N]) is reflected in both the envelopes E[1] to E[N] and the frequency spectrums S[1] to S[N] in each of the above embodiments, it is also possible to employ, for example, a configuration wherein the envelope EA is generated according to the sound receiving information QB using the method of Equation (1) while the frequency spectrum SA is generated without reflecting the sound receiving information QB.
(4) Modification 4
The contents of the sound receiving information QB may be changed appropriately from the above examples. For example, at least one of the directionality mode tU, the sound receiving sensitivity hU, and the sound receiving direction dU may be omitted. Only one type of sensitivity function is applied to calculate the factor β[i] in a configuration wherein the directionality mode tU is omitted, and the variable hU of Equation (5) is set to a predetermined value (for example, “1”) in a configuration wherein the sound receiving sensitivity hU is omitted. It is also preferable to employ a configuration wherein the calculation of Equation (1) or (2) is performed using only one of the factors α[i] and β[i] as the weight W[i]. As understood from the above examples, the invention preferably employs a configuration wherein a sound is synthesized by processing each of the plurality of the sound data D (D[1] to D[N]) according to the relation (such as the distance L[i] or the angle θ[i]) between the position PU of the sound receiving point U and the sound collecting position P[i] corresponding to the sound data D[i].
(5) Modification 5
The contents of the sound element data DS are not limited to the above examples such as the frequency spectrum S and the envelope E. For example, it is also possible to employ a configuration wherein the sound element data DS represents a waveform of the sound element on the time axis. In that case, the sound synthesis unit 42 first calculates the frequency spectrum S or the envelope E by performing frequency analysis, including a discrete Fourier transform, on the sound element data DS, and then uses the results to synthesize the sound.
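As a sketch of this modification, the following function derives a frequency spectrum S and an envelope E from a time-domain waveform; the Hann window and the moving-maximum envelope estimator are assumptions, since the text only calls for frequency analysis including a discrete Fourier transform.

```python
import numpy as np


def analyze_waveform(waveform: np.ndarray):
    """Derive a frequency spectrum S and an envelope E from a time-domain
    sound element via a discrete Fourier transform. The window and the
    moving-maximum envelope estimator are illustrative choices only."""
    spectrum = np.abs(np.fft.rfft(waveform * np.hanning(len(waveform))))
    width = max(len(spectrum) // 64, 1)
    envelope = np.array([spectrum[max(0, i - width):i + width + 1].max()
                         for i in range(len(spectrum))])
    return spectrum, envelope
```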
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
2008-152772 | Jun 2008 | JP | national
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
5536902 | Serra et al. | Jul 1996 | A
5636283 | Hill et al. | Jun 1997 | A
5880392 | Wessel et al. | Mar 1999 | A
5999630 | Iwamatsu | Dec 1999 | A
6239348 | Metcalf | May 2001 | B1
6444892 | Metcalf | Sep 2002 | B1
6740805 | Metcalf | May 2004 | B2
6992245 | Kenmochi et al. | Jan 2006 | B2
7138575 | Childs et al. | Nov 2006 | B2
7138576 | Metcalf | Nov 2006 | B2
7369663 | Sekine | May 2008 | B2
7511213 | Childs et al. | Mar 2009 | B2
7572971 | Metcalf | Aug 2009 | B2
7636448 | Metcalf | Dec 2009 | B2
20020029686 | Metcalf | Mar 2002 | A1
20030029306 | Metcalf | Feb 2003 | A1
20030202667 | Sekine | Oct 2003 | A1
20050223877 | Metcalf | Oct 2005 | A1
20070056434 | Metcalf | Mar 2007 | A1
20080056517 | Algazi et al. | Mar 2008 | A1
20080247552 | Herzog | Oct 2008 | A1
Foreign Patent Documents

Number | Date | Country
---|---|---
2003-099078 | Apr 2003 | JP
2003-255998 | Sep 2003 | JP
2007-240564 | Sep 2007 | JP
WO-2005036523 | Apr 2005 | WO
WO-2007028922 | Mar 2007 | WO
Publication

Number | Date | Country
---|---|---
20090308230 A1 | Dec 2009 | US