The present invention relates to a reproduction device, to a reproduction method, to a reproduction program, and to a recording medium upon which that reproduction program is recorded.
From the past, reproduction devices that reproduce the contents of musical pieces have become widespread. Among reproduction devices of this type there are some reproduction devices that, in order to join a preceding musical piece that is being reproduced first to a succeeding musical piece that is to be reproduced after the preceding musical piece without any junction or seam becoming audibly conspicuous, execute cross-fade reproduction by, while performing fade-out processing in which the audio volume of the final portion of the preceding musical piece is gradually reduced, also performing fade-in processing in which the audio volume of the initial portion of the succeeding musical piece is gradually increased.
In such cross-fade reproduction of musical pieces, the final portion of the preceding musical piece and the initial portion of the succeeding musical piece are reproduced simultaneously. Due to this, a sense of auditory discomfort may sometimes occur during the so-called cross-fade interval, because of differences in the characteristics of these portions of the musical pieces that are being reproduced simultaneously. Accordingly, techniques have been proposed for suppressing the occurrence of any such sense of auditory discomfort during cross-fade reproduction.
In one such proposed technique (refer to Patent Document #1, hereinafter referred to as the “Prior Art Example”), the audio is divided up into audio components for various frequency bands, and cross-fade reproduction is performed for each audio component of each frequency band. With this technique of the Prior Art Example, in order to suppress the occurrence of any sense of auditory discomfort due to the beat pattern of the preceding musical piece and the beat pattern of the succeeding musical piece during the cross-fade interval, when performing the fade-out processing of the preceding musical piece and the fade-in processing of the succeeding musical piece, it is arranged to make the total level of the low frequency region components of the preceding musical piece and of the succeeding musical piece be smaller than the total level of the medium frequency region components of the preceding musical piece and of the succeeding musical piece, and than the total level of the high frequency region components of the preceding musical piece and of the succeeding musical piece.
Patent Document #1: Japanese Laid-Open Patent Publication 2008-27552.
Now, let it be supposed that, for example, the preceding musical piece and the succeeding musical piece are stereo musical pieces. And let it be supposed that the preceding musical piece is a musical piece in which guitar playing is dominant, while the succeeding musical piece is a musical piece in which bass playing is dominant. In this case, the audio component relating to the centrally localized region of the preceding musical piece becomes guitar playing, while the audio component relating to the centrally localized region of the succeeding musical piece becomes bass playing. And, at this time, the tonal characteristics of the centrally localized audio component of the preceding musical piece and the tonal characteristics of the centrally localized audio component of the succeeding musical piece are different. When cross-fade reproduction to which the technique of the Prior Art Example described above is applied is performed for a preceding musical piece and for a succeeding musical piece of this type, sometimes it may happen that, as the centrally localized audio component, both the guitar playing and the bass playing may be heard in the cross-fade interval, so that the musical harmony may deteriorate. As a result, when performing cross-fade reproduction of musical pieces having a plurality of channels, such as stereo musical pieces or surround-sound musical pieces or the like, by employing the technique of the Prior Art Example, there is a possibility that a sense of auditory discomfort may occur during the cross-fade interval.
Due to this, a technique is desired that, when cross-fade reproduction is performed for a musical piece having a plurality of channels, is capable of suppressing the occurrence of a sense of auditory discomfort during the cross-fade interval. To respond to this requirement is one of the problems that the present invention is intended to solve.
The invention described in Claim 1 is a reproduction device that performs cross-fade reproduction of a preceding musical piece and a succeeding musical piece, comprising: an extraction unit that extracts audio components from each of the sounds of said preceding musical piece and said succeeding musical piece, corresponding to a plurality of localized regions that are determined in advance; and a cross-fade processing unit that executes cross-fade processing in which, for each of said localized regions, fade-out processing and fade-in processing are performed for a pair of corresponding audio components of said preceding musical piece and said succeeding musical piece.
Furthermore, the invention described in Claim 11 is a reproduction method employed by a reproduction device that comprises an extraction unit and a cross-fade processing unit, and that performs cross-fade reproduction of a preceding musical piece and a succeeding musical piece, comprising the steps of: an extracting step in which audio components are extracted by said extraction unit from the sounds of each of said preceding musical piece and said succeeding musical piece, corresponding to a plurality of localized regions that are determined in advance; and a cross-fade processing step of executing cross-fade processing in which, for each of said localized regions, fade-out processing and fade-in processing are performed by said cross-fade processing unit for a pair of corresponding audio components of said preceding musical piece and said succeeding musical piece.
Moreover, the invention described in Claim 12 is a reproduction program, wherein it causes a computer included in a reproduction device to execute a reproduction method according to Claim 11.
Yet further, the invention described in Claim 13 is a recording medium, wherein a reproduction program according to Claim 12 is recorded thereupon in a form that can be read by a computer in a reproduction device.
100A, 100B: reproduction devices
131, 132: extraction units
133: cross-fade processing unit
191B: storage unit
192B: reproduction control unit (generation unit, determination unit)
Embodiments of the present invention will now be described with reference to the appended drawings. Note that, in the following explanation and drawings, the same reference symbols are appended to elements that are the same or equivalent, and duplicated explanation thereof will be omitted.
The first embodiment of the present invention will now be explained with reference to
The schematic configuration of a reproduction device 100A according to the first embodiment is shown as a block diagram in
As shown in
The audio source unit 110 comprises a non-volatile storage element. Musical piece contents information MCI is stored in the audio source unit 110.
As shown in
Returning to
And, according to a reproduction command PC2 sent from the control unit 190A, the musical piece providing unit 122 reads in from the audio source unit 110 musical piece audio data for a musical piece that has been designated, as a musical piece signal MD2. And, from this musical piece signal MD2, the musical piece providing unit 122 generates a musical piece L channel signal L2 and a musical piece R channel signal R2. The musical piece L channel signal L2 and the musical piece R channel signal R2 that have thus been generated are sent to the digital processing unit 130.
Here, according to control by the control unit 190A, it is arranged for the musical piece providing unit 121 and the musical piece providing unit 122 to read in musical piece audio data alternatingly from the audio source unit 110, and to generate musical piece L channel signals and musical piece R channel signals. For example it may be arranged that, when the musical piece providing unit 121 has read in the musical piece audio data that is the first in reproduction order, then the musical piece providing unit 122 reads in the musical piece audio data that is the second in reproduction order, and then the musical piece providing unit 121 reads in the musical piece audio data that is the third in reproduction order, and so on.
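The alternating read-in described above can be sketched as follows. This is a minimal illustration only; the function name and the 0-based reproduction index are assumptions, not taken from the embodiment.

```python
def assign_providing_unit(reproduction_index):
    """Alternate the musical piece audio data between the two providing
    units: pieces at even positions in the reproduction order (0-based)
    are read in by the musical piece providing unit 121, and pieces at
    odd positions by the musical piece providing unit 122."""
    return 121 if reproduction_index % 2 == 0 else 122
```

For the first, second, and third pieces in reproduction order this yields units 121, 122, and 121 respectively, matching the example given above.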
The digital processing unit 130 receives the musical piece L channel signal L1 (hereinafter also sometimes referred to as the “signal L1”) and the musical piece R channel signal R1 (hereinafter also sometimes referred to as the “signal R1”) sent from the musical piece providing unit 121. Moreover, the digital processing unit 130 receives the musical piece L channel signal L2 (hereinafter also sometimes referred to as the “signal L2”) and the musical piece R channel signal R2 (hereinafter also sometimes referred to as the “signal R2”) sent from the musical piece providing unit 122.
And, on the basis of the signal L1, the signal R1, the signal L2, and the signal R2, the digital processing unit 130 performs cross-fade reproduction processing and so on for the musical pieces, and generates an L channel musical piece reproduction signal CLD and an R channel musical piece reproduction signal CRD. The L channel musical piece reproduction signal CLD (hereinafter also sometimes termed the “L channel reproduction signal CLD”) and the R channel musical piece reproduction signal CRD (hereinafter also sometimes termed the “R channel reproduction signal CRD”) that have been generated in this manner are sent to the analog processing unit 150. The details of the configuration of the digital processing unit 130 will be described hereinafter.
The analog processing unit 150 receives the L channel reproduction signal CLD and the R channel reproduction signal CRD sent from the digital processing unit 130. And the analog processing unit 150 generates an audio output signal AOSL on the basis of the L channel reproduction signal CLD and generates an audio output signal AOSR on the basis of the R channel reproduction signal CRD.
The analog processing unit 150 having the functions described above comprises a D/A (Digital to Analog) conversion unit, an audio volume adjustment unit, and a power amplification unit. Here, the D/A conversion unit receives the L channel reproduction signal CLD and the R channel reproduction signal CRD sent from the digital processing unit 130. And the D/A conversion unit converts the L channel reproduction signal CLD and the R channel reproduction signal CRD to analog signals. Note that, the D/A conversion unit comprises two D/A converters that have mutually similar configurations, corresponding to the L channel and to the R channel. The results of analog conversion by the D/A conversion unit are sent to the audio volume adjustment unit.
The audio volume adjustment unit receives the results of analog conversion of the L channel and the R channel sent from the D/A conversion unit. And the audio volume adjustment unit performs audio volume adjustment processing upon the analog conversion result signals corresponding to the L channel and to the R channel. Note that, the audio volume adjustment unit comprises two electronic volume elements having mutually similar configurations, corresponding to the L channel and to the R channel. The results of audio volume adjustment by the audio volume adjustment unit are sent to the power amplification unit.
The power amplification unit receives the audio volume adjustment result signals for the L channel and the R channel sent from the audio volume adjustment unit. And the power amplification unit amplifies the power of the audio volume adjustment result signals, and generates the audio output signal AOSL and the audio output signal AOSR. The audio output signal AOSL that has been thus generated is sent to the speaker unit 160L. And the audio output signal AOSR that has been thus generated is sent to the speaker unit 160R.
The speaker unit 160L receives the audio output signal AOSL sent from the analog processing unit 150. And the speaker unit 160L outputs reproduction audio according to the audio output signal AOSL.
Moreover, the speaker unit 160R receives the audio output signal AOSR sent from the analog processing unit 150. And the speaker unit 160R outputs reproduction audio according to the audio output signal AOSR.
The input unit 180 comprises a key unit that is provided to a main body unit of the reproduction device 100A, or a remote input device or the like to which a key unit is provided. Here, a touch panel provided to a display unit not shown in the figures may be employed as a key unit that is provided to the main body unit. Moreover, instead of provision of a key unit, a configuration may also be employed that enables voice input. The result of input to the input unit 180 is sent to the control unit 190A as input data IPD.
Along with performing processing of various types, the control unit 190A also controls the overall operation of the reproduction device 100A. The details of the configuration of the control unit 190A will be described hereinafter.
The configuration of the digital processing unit 130 will now be explained.
As shown in
The extraction unit 131 receives the signal L1 and the signal R1 sent from the musical piece providing unit 121. And, from the musical piece signal MD1, the extraction unit 131 extracts a signal M1 for the audio component relating to its centrally localized region, according to the following Equation (1). Moreover, from the musical piece signal MD1, the extraction unit 131 extracts a signal S1 for the audio component relating to its non-centrally localized regions, according to the following Equation (2).
M1=(L1+R1)/2 (1)
S1=(L1−R1)/2 (2)
And the extraction unit 131 sends the signal M1 and the signal S1 to the cross-fade processing unit 133.
The extraction unit 132 receives the signal L2 and the signal R2 sent from the musical piece providing unit 122. And the extraction unit 132 extracts, from the musical piece signal MD2, a signal M2 for the audio component relating to its centrally localized region, according to the following Equation (3). Moreover, the extraction unit 132 extracts, from the musical piece signal MD2, a signal S2 for the audio component relating to its non-centrally localized regions, according to the following Equation (4).
M2=(L2+R2)/2 (3)
S2=(L2−R2)/2 (4)
And the extraction unit 132 sends the signal M2 and the signal S2 to the cross-fade processing unit 133.
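The operation of the extraction units 131 and 132 according to Equations (1) through (4) can be sketched as follows, assuming that each channel signal is represented as a list of floating-point samples; the function name is illustrative, not taken from the embodiment.

```python
def extract_mid_side(left, right):
    """Split a stereo channel pair into the audio component relating to the
    centrally localized region (mid) and the component relating to the
    non-centrally localized regions (side), per Equations (1)-(4):
    M = (L + R) / 2,  S = (L - R) / 2."""
    mid = [(l + r) / 2.0 for l, r in zip(left, right)]
    side = [(l - r) / 2.0 for l, r in zip(left, right)]
    return mid, side
```

A source that is panned centrally (identical in the L and R channels) appears entirely in the mid signal and cancels out of the side signal, which is what allows the centrally localized component to be faded independently.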
The cross-fade processing unit 133 performs cross-fade processing upon the musical piece signal MD1 and the musical piece signal MD2. The cross-fade processing unit 133 having the above function comprises a first processing unit 211, a second processing unit 212, a third processing unit 213, and a fourth processing unit 214.
The first processing unit 211 receives the signal M1 sent from the extraction unit 131. And the first processing unit 211 multiplies the signal M1 by a gain designated by a first gain command XC1 sent from the control unit 190A, and thereby generates a signal MX1. The signal MX1 that has thus been generated is sent to the stereo signal extraction unit 135.
The second processing unit 212 receives the signal S1 sent from the extraction unit 131. And the second processing unit 212 multiplies the signal S1 by a gain designated by a second gain command XC2 sent from the control unit 190A, and thereby generates a signal SX1. The signal SX1 that has thus been generated is sent to the stereo signal extraction unit 135.
The third processing unit 213 receives the signal M2 sent from the extraction unit 132. And the third processing unit 213 multiplies the signal M2 by a gain designated by a third gain command XC3 sent from the control unit 190A, and thereby generates a signal MX2. The signal MX2 that has thus been generated is sent to the stereo signal extraction unit 136.
The fourth processing unit 214 receives the signal S2 sent from the extraction unit 132. And the fourth processing unit 214 multiplies the signal S2 by a gain designated by a fourth gain command XC4 sent from the control unit 190A, and thereby generates a signal SX2. The signal SX2 that has thus been generated is sent to the stereo signal extraction unit 136.
The stereo signal extraction unit 135 receives the signal MX1 sent from the first processing unit 211. Moreover, the stereo signal extraction unit 135 receives the signal SX1 sent from the second processing unit 212. And the stereo signal extraction unit 135 extracts a signal LC1 for the L channel, according to the following Equation (5). Moreover, the stereo signal extraction unit 135 extracts a signal RC1 for the R channel, according to the following Equation (6).
LC1=(MX1+SX1)/2 (5)
RC1=(MX1−SX1)/2 (6)
And the stereo signal extraction unit 135 sends the signal LC1 to the signal addition unit 137L. Furthermore, the stereo signal extraction unit 135 sends the signal RC1 to the signal addition unit 137R.
The stereo signal extraction unit 136 receives the signal MX2 sent from the third processing unit 213. Moreover, the stereo signal extraction unit 136 receives the signal SX2 sent from the fourth processing unit 214. And the stereo signal extraction unit 136 extracts a signal LC2 for the L channel, according to the following Equation (7). Moreover, the stereo signal extraction unit 136 extracts a signal RC2 for the R channel, according to the following Equation (8).
LC2=(MX2+SX2)/2 (7)
RC2=(MX2−SX2)/2 (8)
And the stereo signal extraction unit 136 sends the signal LC2 to the signal addition unit 137L. Furthermore, the stereo signal extraction unit 136 sends the signal RC2 to the signal addition unit 137R.
The signal addition unit 137L receives the signal LC1 sent from the stereo signal extraction unit 135. Moreover, the signal addition unit 137L receives the signal LC2 sent from the stereo signal extraction unit 136. And the signal addition unit 137L adds together the signal LC1 and the signal LC2 to generate an L channel musical piece reproduction signal CLD. The L channel musical piece reproduction signal CLD that has thus been generated is sent to the analog processing unit 150.
The signal addition unit 137R receives the signal RC1 sent from the stereo signal extraction unit 135. Moreover, the signal addition unit 137R receives the signal RC2 sent from the stereo signal extraction unit 136. And the signal addition unit 137R adds together the signal RC1 and the signal RC2 to generate an R channel musical piece reproduction signal CRD. The R channel musical piece reproduction signal CRD that has thus been generated is sent to the analog processing unit 150.
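The path from the processing units 211 through 214 to the signal addition units 137L and 137R can be sketched as follows, as one reading of Equations (5) through (8); the list-of-samples representation and the function names are assumptions.

```python
def process_piece(mid, side, gain_mid, gain_side):
    """Apply the gains designated by the gain commands to the mid and side
    components (processing units 211-214), then recover per-channel signals
    per Equations (5)-(8): LC = (MX + SX) / 2, RC = (MX - SX) / 2."""
    mx = [m * gain_mid for m in mid]
    sx = [s * gain_side for s in side]
    lc = [(a + b) / 2.0 for a, b in zip(mx, sx)]
    rc = [(a - b) / 2.0 for a, b in zip(mx, sx)]
    return lc, rc

def add_signals(lc1, rc1, lc2, rc2):
    """Signal addition units 137L and 137R: sum the contributions of the
    preceding and succeeding pieces to form the reproduction signals
    CLD and CRD."""
    cld = [a + b for a, b in zip(lc1, lc2)]
    crd = [a + b for a, b in zip(rc1, rc2)]
    return cld, crd
```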
The configuration of the control unit 190A will now be explained.
As shown in
The storage unit 191A includes a non-volatile storage element. In the first embodiment, audio recording time information RTI and cross-fade processing information XFA are stored in the storage unit 191A.
As shown in
As shown in
The first interval processing information XFA1 mentioned above is information, for the audio components (i.e. the signals M1 and M2) relating to the centrally localized regions, relating to processing modes for first intervals from the start of fade-out processing for the preceding musical piece until the end of fade-in processing for the succeeding musical piece (hereinafter also sometimes simply termed “first intervals”). Moreover, the second interval processing information XFA2 mentioned above is information, for the audio components (i.e. the signals S1 and S2) relating to the non-centrally localized regions, relating to processing modes for second intervals from the start of fade-out processing for the preceding musical piece until the end of fade-in processing for the succeeding musical piece (hereinafter also sometimes simply termed “second intervals”).
Here, the first interval processing information XFA1 includes the following information items (a1) through (g1) relating to processing of the first interval for the audio component relating to the centrally localized regions:
(a1) ΔToutMID,START: start time point information for fade-out
(b1) ΔToutMID,END: end time point information for fade-out
(c1) GoutMID: gain information for fade-out
(d1) ΔTinMID,START: start time point information for fade-in
(e1) ΔTinMID,END: end time point information for fade-in
(f1) GinMID: gain information for fade-in
(g1) ΔT: (end time point tPR,end of preceding musical piece)−(start time point tSU,start of succeeding musical piece)
Moreover, the second interval processing information XFA2 includes the following information items (a2) through (g2) relating to processing of the second interval for the audio components relating to the non-centrally localized regions:
(a2) ΔToutSIDE,START: start time point information for fade-out
(b2) ΔToutSIDE,END: end time point information for fade-out
(c2) GoutSIDE: gain information for fade-out
(d2) ΔTinSIDE,START: start time point information for fade-in
(e2) ΔTinSIDE,END: end time point information for fade-in
(f2) GinSIDE: gain information for fade-in
(g2) ΔT: (end time point tPR,end of preceding musical piece)−(start time point tSU,start of succeeding musical piece)
In the first embodiment, the information items (a1) through (g1) and (a2) through (g2) are obtained in advance on the basis of experiment, simulation, experience, and so on. Note that, (g1) and (g2) have the same value.
The relationship between the information items (a1) through (g1) and the information items (a2) through (g2) and the timing of the cross-fade reproduction of the musical pieces is shown in
Moreover, taking the time point tSU,start of the start of the succeeding musical piece as a reference, (d1) ΔTinMID,START, (e1) ΔTinMID,END, (d2) ΔTinSIDE,START, and (e2) ΔTinSIDE,END are information items relating to the time intervals elapsed from that time point. Note that, in the first embodiment it is supposed that, for (d2), “ΔTinSIDE,START=0” holds.
Here, for the audio component relating to the centrally localized region of the preceding musical piece, the start time point for the fade-out processing is taken as being the time point τoutMID,START, and the end time point for the fade-out processing is taken as being the time point τoutMID,END. Moreover, for the audio component relating to the centrally localized region of the succeeding musical piece, the start time point for the fade-in processing is taken as being the time point τinMID,START, and the end time point for the fade-in processing is taken as being the time point τinMID,END.
In this case, the gain GoutMID applied to the fade-out processing for the audio component relating to the centrally localized region of the preceding musical piece is given by the following Equation (9):
GoutMID=foutMID(t−τoutMID,START) (τoutMID,START≤t≤τoutMID,END) (9)
Here, foutMID(t−τoutMID,START) is a function that is determined in advance, with foutMID(0)=1 and foutMID(τoutMID,END−τoutMID,START)=0.
Moreover, the gain GinMID applied to the fade-in processing for the audio component relating to the centrally localized region of the succeeding musical piece is given by the following Equation (10):
GinMID=finMID(t−τinMID,START) (τinMID,START≤t≤τinMID,END) (10)
Here, finMID(t−τinMID,START) is a function that is determined in advance, with finMID(0)=0 and finMID(τinMID,END−τinMID,START)=1.
Moreover, for the audio components relating to the non-centrally localized regions of the preceding musical piece, the start time point for the fade-out processing is taken as being the time point τoutSIDE,START, and the end time point for the fade-out processing is taken as being the time point τoutSIDE,END. Furthermore, for the audio components relating to the non-centrally localized regions of the succeeding musical piece, the start time point for the fade-in processing is taken as being the time point τinSIDE,START, and the end time point for the fade-in processing is taken as being the time point τinSIDE,END.
In this case, the gain GoutSIDE applied to the fade-out processing for the audio components relating to the non-centrally localized regions of the preceding musical piece is given by the following Equation (11):
GoutSIDE=foutSIDE(t−τoutSIDE,START) (τoutSIDE,START≤t≤τoutSIDE,END) (11)
Here, foutSIDE(t−τoutSIDE,START) is a function that is determined in advance, with foutSIDE(0)=1 and foutSIDE(τoutSIDE,END−τoutSIDE,START)=0.
Moreover, the gain GinSIDE applied to the fade-in processing for the audio components relating to the non-centrally localized regions of the succeeding musical piece is given by the following Equation (12):
GinSIDE=finSIDE(t−τinSIDE,START) (τinSIDE,START≤t≤τinSIDE,END) (12)
Here, finSIDE(t−τinSIDE,START) is a function that is determined in advance, with finSIDE(0)=0 and finSIDE(τinSIDE,END−τinSIDE,START)=1.
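Equations (9) through (12) specify only the boundary values of the fade functions; the functions themselves are determined in advance. As one concrete choice, a linear ramp satisfying those boundary conditions can be sketched as follows (the linear shape is an assumption, not mandated by the embodiment):

```python
def fade_out_gain(t, t_start, t_end):
    """A linear example of the fade-out functions of Equations (9) and (11):
    equals 1 at t_start and decreases to 0 at t_end, matching the stated
    boundary conditions f_out(0) = 1 and f_out(t_end - t_start) = 0."""
    if t <= t_start:
        return 1.0
    if t >= t_end:
        return 0.0
    return 1.0 - (t - t_start) / (t_end - t_start)

def fade_in_gain(t, t_start, t_end):
    """A linear example of the fade-in functions of Equations (10) and (12):
    equals 0 at t_start and increases to 1 at t_end."""
    if t <= t_start:
        return 0.0
    if t >= t_end:
        return 1.0
    return (t - t_start) / (t_end - t_start)
```

Any monotonic curve with the same endpoint values (for example an equal-power cosine curve) would equally satisfy Equations (9) through (12).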
Note that, the gain foutMID(t−τoutMID,START), the gain finMID(t−τinMID,START), the gain foutSIDE(t−τoutSIDE,START) and the gain finSIDE(t−τinSIDE,START) employed in the first embodiment are shown in
Here, as shown in
Furthermore, in the first embodiment, it is arranged for the fade-out processing relating to the centrally localized regions to end after the start of the fade-in processing relating to the centrally localized regions. In other words, it is arranged for cross-fade reproduction also to be performed for the audio components relating to the centrally localized regions.
Returning to
Before reproducing the contents of the musical pieces, the reproduction control unit 192A having the above described function first accesses the audio source unit 110, and acquires the musical piece contents identifiers #p and the audio recording time items #p included in the musical piece contents information MCI. Next, the reproduction control unit 192A creates audio recording time information RTI on the basis of these musical piece contents identifiers #p and these audio recording time items #p that have been acquired. And next the reproduction control unit 192A stores this audio recording time information RTI in the storage unit 191A.
Subsequently, on the basis of the audio recording time information RTI and the cross-fade processing information XFA, the reproduction control unit 192A creates a musical piece reproduction plan for the musical piece contents, and stores this plan internally. The details of the processing for creation of this musical piece reproduction plan by the reproduction control unit 192A will be described hereinafter.
Furthermore, the reproduction control unit 192A receives the input data IPD sent from the input unit 180. When the contents of this input data IPD are a specification for reproduction of the contents of musical pieces, then the reproduction control unit 192A generates a reproduction command PC1 according to the musical piece reproduction plan and sends this command to the musical piece providing unit 121, and also generates a reproduction command PC2 and sends it to the musical piece providing unit 122.
Moreover, the reproduction control unit 192A generates gain commands XC and sends them to the cross-fade processing unit 133, and performs reproduction control of the musical piece contents. During this reproduction control, the reproduction control unit 192A performs control for fade-out processing for the preceding musical piece, which is the musical piece that is to be reproduced first, and then performs control for fade-in processing for the succeeding musical piece, which is the musical piece that is to be reproduced next. Here, the gain commands XC include a first gain command XC1, a second gain command XC2, a third gain command XC3, and a fourth gain command XC4.
In the first embodiment, during control for cross-fade reproduction of the musical piece contents, it is arranged for the reproduction control unit 192A to perform control for fade-out processing for the preceding musical piece and to perform control for fade-in processing for the succeeding musical piece, for the audio components relating to the centrally localized region and also for the audio components relating to the non-centrally localized region.
During the above control, when the centrally localized signal for the preceding musical piece is M1, the non-centrally localized signal for the preceding musical piece is S1, the centrally localized signal for the succeeding musical piece is M2, and the non-centrally localized signal for the succeeding musical piece is S2, then the reproduction control unit 192A performs control for performing fade-out processing upon the signals M1 and S1, and performs control for performing fade-in processing upon the signals M2 and S2. In other words, the reproduction control unit 192A generates the first gain command XC1 for performing fade-out processing upon the signal M1 relating to the centrally localized region of the preceding musical piece and sends the first gain command to the first processing unit 211, and generates the third gain command XC3 for performing fade-in processing upon the signal M2 relating to the centrally localized region of the succeeding musical piece and sends the third gain command to the third processing unit 213 (this is control of the first interval for the audio signals relating to the centrally localized regions).
Moreover, in this case, the reproduction control unit 192A generates the second gain command XC2 for performing fade-out processing upon the signal S1 relating to the non-centrally localized regions of the preceding musical piece and sends the second gain command to the second processing unit 212, and generates the fourth gain command XC4 for performing fade-in processing upon the signal S2 relating to the non-centrally localized regions of the succeeding musical piece and sends the fourth gain command to the fourth processing unit 214 (this is control of the second interval for the audio signals relating to the non-centrally localized regions).
On the other hand, when the centrally localized signal for the preceding musical piece is the signal M2, the non-centrally localized signal for the preceding musical piece is the signal S2, the centrally localized signal for the succeeding musical piece is the signal M1, and the non-centrally localized signal for the succeeding musical piece is the signal S1, then the reproduction control unit 192A performs control for performing fade-out processing upon the signals M2 and S2, and performs control for performing fade-in processing upon the signals M1 and S1. In other words, the reproduction control unit 192A generates the third gain command XC3 for performing fade-out processing upon the signal M2 relating to the centrally localized region of the preceding musical piece and sends the third gain command to the third processing unit 213, and generates the first gain command XC1 for performing fade-in processing upon the signal M1 relating to the centrally localized region of the succeeding musical piece and sends the first gain command to the first processing unit 211 (this is control of the first interval for the audio signals relating to the centrally localized regions).
Moreover, in this case, the reproduction control unit 192A generates the fourth gain command XC4 for performing fade-out processing upon the signal S2 relating to the non-centrally localized regions of the preceding musical piece and sends the fourth gain command to the fourth processing unit 214, and generates the second gain command XC2 for performing fade-in processing upon the signal S1 relating to the non-centrally localized regions of the succeeding musical piece and sends the second gain command to the second processing unit 212 (this is control of the second interval for the audio signals relating to the non-centrally localized regions).
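The symmetric routing described in the two cases above can be condensed into a single sketch: which of the four gain commands carries a fade-out gain and which carries a fade-in gain depends only on whether the preceding piece currently flows through the extraction unit 131 path (signals M1/S1) or the extraction unit 132 path (signals M2/S2). The function name and boolean flag are illustrative assumptions.

```python
def route_gain_commands(preceding_on_path_131, g_out_mid, g_out_side,
                        g_in_mid, g_in_side):
    """Return the gains (XC1, XC2, XC3, XC4) for the processing units
    211, 212, 213, and 214 respectively. When the preceding piece is
    carried by the signals M1/S1 (extraction unit 131), units 211/212
    receive the fade-out gains and units 213/214 the fade-in gains;
    when the roles are reversed, so is the routing."""
    if preceding_on_path_131:
        return (g_out_mid, g_out_side, g_in_mid, g_in_side)
    return (g_in_mid, g_in_side, g_out_mid, g_out_side)
```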
The details of the reproduction control procedure performed by the reproduction control unit 192A will be described hereinafter.
The operation of the reproduction device 100A having the configuration described above will now be explained, with principal emphasis being given to the processing for the cross-fade reproduction of musical pieces.
In the following explanation it will be supposed that the preceding musical piece is the musical piece #1 and the succeeding musical piece is the musical piece #2, and the explanation will concentrate upon the control for cross-fade reproduction between the musical piece #1 and the musical piece #2.
First, the processing by the reproduction control unit 192A for creating a musical piece reproduction plan for the contents of the musical pieces will be explained. This processing for creating the musical piece reproduction plan is performed before reproduction of the contents of the musical pieces.
When creating the musical piece reproduction plan for the contents of the musical pieces, first the reproduction control unit 192A accesses the audio source unit 110, and acquires the musical piece contents identifiers #p and the audio recording time items #p included in the musical piece contents information MCI. And next, upon the basis of the musical piece contents identifiers #p and the audio recording time items #p, the reproduction control unit 192A creates audio recording time information RTI and stores it in the storage unit 191A.
Next, the reproduction control unit 192A acquires the time interval information ΔToutMID,START, ΔToutMID,END, ΔTinMID,START, and ΔTinMID,END of the first interval processing information XFA1 in the cross-fade processing information XFA, and acquires the time interval information ΔToutSIDE,START, ΔToutSIDE,END(=0), ΔTinSIDE,START(=0), and ΔTinSIDE,END of the second interval processing information XFA2 in the cross-fade processing information XFA. Subsequently, on the basis of this time interval information that has thus been acquired and the audio recording time information RTI, the reproduction control unit 192A creates a musical piece reproduction plan in which the reproduction start time point of the musical piece #1 is taken as being “0”.
The details of a musical piece reproduction plan when the preceding musical piece is the musical piece #1 and the succeeding musical piece is the musical piece #2 are shown in
Next, the musical piece reproduction processing of the musical piece contents by the reproduction device 100A will be explained.
For the musical piece reproduction processing of the musical piece contents by the reproduction device 100A, a reproduction command for the musical piece contents is inputted to the input unit 180, and the reproduction control unit 192A starts operation upon receipt of input data IPD whose contents are that reproduction command.
Upon receipt of this input data IPD, according to the musical piece reproduction plan, the reproduction control unit 192A generates a reproduction command PC1 in which the musical piece #1 is specified, and sends the reproduction command to the musical piece providing unit 121 (at the time point t=0).
In the following explanation, taking the time point when the reproduction command PC1 is sent to the musical piece providing unit 121 as being the time point “0”, the reproduction processing of the musical piece contents by the reproduction control unit 192A subsequent to the time point t=0 will be explained in order of the successive time points. In more detail, in the following, the cross-fade processing for the musical piece contents will be explained in order with reference to the time points t=“0”, “τoutSIDE,START=τinSIDE,START”, “τoutMID,START=τinMID,START”, “τoutMID,END=τinMID,END”, and “τoutSIDE,END=τinSIDE,END” (refer to
Moreover, at the time point t=0, the reproduction control unit 192A sends a first gain command XC1 specifying a gain of “1.0” to the first processing unit 211. And, at the time point t=0, the reproduction control unit 192A sends a second gain command XC2 specifying a gain of “1.0” to the second processing unit 212.
Upon receipt of the above reproduction command PC1, the musical piece providing unit 121 reads in the musical piece audio data item #1 from the audio source unit 110 as the musical piece signal MD1, and creates a signal L1 and a signal R1 on the basis of this musical piece signal MD1. And the musical piece providing unit 121 sends the signal L1 and the signal R1 that it has thus created to the digital processing unit 130.
In the digital processing unit 130, the extraction unit 131 extracts the signal M1 from the signal L1 and the signal R1 according to Equation (1), and extracts the signal S1 from the signal L1 and the signal R1 according to Equation (2). And the extraction unit 131 sends the signal M1 to the first processing unit 211 and sends the signal S1 to the second processing unit 212.
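Although Equations (1) and (2) themselves are not reproduced in this passage, a conventional mid/side decomposition of the form M = (L + R)/2 and S = (L − R)/2 is consistent with the behavior described (with gains of “1.0”, the original channels are recovered exactly). The following is a minimal sketch under that assumption; the patent's actual equations may differ.

```python
def extract_mid_side(l, r):
    """Split L/R sample sequences into a centrally localized (mid)
    component M and a non-centrally localized (side) component S.
    The (L+R)/2 and (L-R)/2 forms are assumed stand-ins for the
    patent's Equations (1) and (2)."""
    m = [(a + b) / 2.0 for a, b in zip(l, r)]  # assumed Equation (1)
    s = [(a - b) / 2.0 for a, b in zip(l, r)]  # assumed Equation (2)
    return m, s
```

A component common to both channels (e.g. a centrally panned vocal) appears in M, while left/right differences appear in S.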
Upon receipt of the signal M1, the first processing unit 211 takes the signal M1 as being the signal MX1 according to the first gain command XC1 (gain “1.0”), and sends it to the stereo signal extraction unit 135. Moreover, upon receipt of the signal S1, the second processing unit 212 takes the signal S1 as being the signal SX1 according to the second gain command XC2 (gain “1.0”), and sends it to the stereo signal extraction unit 135.
Upon receipt of the signal MX1 and the signal SX1, the stereo signal extraction unit 135 extracts the signal LC1 for the L channel(=the signal L1) according to Equation (5), and extracts the signal RC1 for the R channel(=the signal R1) according to Equation (6). And the stereo signal extraction unit 135 sends the signal LC1 to the signal addition unit 137L and sends the signal RC1 to the signal addition unit 137R.
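Equations (5) and (6) recombine the gain-adjusted mid/side signals into per-channel signals. Assuming the conventional inverse L = M + S and R = M − S (so that with unity gains the signal LC1 equals the signal L1 and the signal RC1 equals the signal R1, as the text states), the extraction might be sketched as:

```python
def reconstruct_stereo(mx, sx):
    """Recombine gain-adjusted mid/side signals into L- and R-channel
    signals; M + S and M - S are assumed stand-ins for the patent's
    Equations (5) and (6)."""
    lc = [m + s for m, s in zip(mx, sx)]  # assumed Equation (5)
    rc = [m - s for m, s in zip(mx, sx)]  # assumed Equation (6)
    return lc, rc
```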
The signal addition unit 137L takes the signal LC1 sent from the stereo signal extraction unit 135 as being the L channel musical piece reproduction signal CLD, and sends it to the analog processing unit 150. Moreover, the signal addition unit 137R takes the signal RC1 sent from the stereo signal extraction unit 135 as being the R channel musical piece reproduction signal CRD, and sends it to the analog processing unit 150.
In the analog processing unit 150, processing is performed in sequence by the D/A conversion unit, the audio volume adjustment unit, and the power amplification unit, and audio output signals AOSL and AOSR are created from the L channel reproduction signal CLD and the R channel reproduction signal CRD and are sent to the speaker units 160L and 160R (refer to
Thereafter, when the time point “τoutSIDE,START=τinSIDE,START” arrives (refer to
In the digital processing unit 130, the extraction unit 132 extracts the signal M2 from the signal L2 and the signal R2 according to Equation (3), and extracts the signal S2 from the signal L2 and the signal R2 according to Equation (4). And the extraction unit 132 sends the signal M2 to the third processing unit 213, and sends the signal S2 to the fourth processing unit 214.
Moreover, when the time point “τoutSIDE,START=τinSIDE,START” arrives, in order to start the fade-out processing for the signal S1 relating to the non-centrally localized regions of the preceding musical piece (i.e. of the musical piece #1), the reproduction control unit 192A refers to the cross-fade processing information XFA and generates a second gain command XC2 that specifies the gain foutSIDE (refer to Equation (11)), and sends the command to the second processing unit 212. Moreover, when the time point “τoutSIDE,START=τinSIDE,START” arrives, in order to start the fade-in processing for the signal S2 relating to the non-centrally localized regions of the succeeding musical piece (i.e. of the musical piece #2), the reproduction control unit 192A generates a fourth gain command XC4 that specifies the gain finSIDE (refer to Equation (12)), and sends the command to the fourth processing unit 214.
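The exact forms of the gains foutSIDE and finSIDE in Equations (11) and (12) are not reproduced here; simple complementary linear ramps over the fade interval are one plausible form, sketched below as a hedged stand-in.

```python
def f_out(t, t_start, t_end):
    """Assumed linear fade-out gain: 1.0 before t_start, 0.0 after
    t_end, decreasing linearly in between (a stand-in for foutSIDE
    in Equation (11))."""
    if t <= t_start:
        return 1.0
    if t >= t_end:
        return 0.0
    return (t_end - t) / (t_end - t_start)

def f_in(t, t_start, t_end):
    """Assumed linear fade-in gain, the complement of f_out (a
    stand-in for finSIDE in Equation (12))."""
    return 1.0 - f_out(t, t_start, t_end)
```

With this choice the two gains always sum to 1.0, so the total side-component level stays constant across the fade; the patent does not state that this is the actual design.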
Moreover, at this time, the reproduction control unit 192A sends a first gain command XC1 specifying a gain of “1.0” to the first processing unit 211, and sends a third gain command XC3 specifying a gain of “0” to the third processing unit 213.
At this time, for the centrally localized audio components, the first processing unit 211 takes the signal M1 as being the signal MX1, and sends it to the stereo signal extraction unit 135. Moreover, the third processing unit 213 sends a signal MX2 obtained by multiplying the signal M2 by the gain “0”, i.e. a null audio signal, to the stereo signal extraction unit 136.
On the other hand, for the non-centrally localized audio components, the second processing unit 212 sends a signal SX1 obtained by multiplying the signal S1 by the gain foutSIDE to the stereo signal extraction unit 135. Moreover, the fourth processing unit 214 sends a signal SX2 obtained by multiplying the signal S2 by the gain finSIDE to the stereo signal extraction unit 136.
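Each of the first through fourth processing units applies its commanded gain sample by sample; a gain of “0” therefore yields the null audio signal mentioned above, and a gain of “1.0” passes the signal through unchanged. A one-line sketch:

```python
def apply_gain(signal, gain):
    """Per-sample gain multiplication as performed by the processing
    units 211 through 214; gain 0 produces a null signal, gain 1.0
    passes the input through unchanged."""
    return [gain * x for x in signal]
```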
The stereo signal extraction unit 135 extracts the signal LC1 for the L channel and the signal RC1 for the R channel from the signal MX1(=the signal M1) and the signal SX1(=the signal S1×foutSIDE). And the stereo signal extraction unit 135 sends the signal LC1 to the signal addition unit 137L, and sends the signal RC1 to the signal addition unit 137R.
Moreover, the stereo signal extraction unit 136 extracts the signal LC2 for the L channel and the signal RC2 for the R channel from the signal SX2(=the signal S2×finSIDE). And the stereo signal extraction unit 136 sends the signal LC2 to the signal addition unit 137L, and sends the signal RC2 to the signal addition unit 137R.
The signal addition unit 137L adds together the signal LC1 sent from the stereo signal extraction unit 135 and the signal LC2 sent from the stereo signal extraction unit 136 to generate an L channel musical piece reproduction signal CLD, and sends the signal CLD to the analog processing unit 150. Moreover, the signal addition unit 137R adds together the signal RC1 sent from the stereo signal extraction unit 135 and the signal RC2 sent from the stereo signal extraction unit 136 to generate an R channel musical piece reproduction signal CRD, and sends the signal CRD to the analog processing unit 150.
Subsequently, audio output signals AOSL and AOSR are generated by the analog processing unit 150. And the speaker units 160L and 160R reproduce audio outputs according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutMID,START=τinMID,START” arrives (refer to
Moreover, at this time, the reproduction control unit 192A sends a second gain command XC2 specifying the gain foutSIDE to the second processing unit 212, and sends a fourth gain command XC4 specifying the gain finSIDE to the fourth processing unit 214.
For the centrally localized audio component, the first processing unit 211 sends a signal MX1 obtained by multiplying the signal M1 by the gain foutMID to the stereo signal extraction unit 135. Moreover, upon receipt of the signal M2, the third processing unit 213 sends a signal MX2 obtained by multiplying the signal M2 by the gain finMID to the stereo signal extraction unit 136.
On the other hand, for the non-centrally localized audio components, the second processing unit 212 sends a signal SX1 obtained by multiplying the signal S1 by the gain foutSIDE to the stereo signal extraction unit 135. Moreover, the fourth processing unit 214 sends a signal SX2 obtained by multiplying the signal S2 by the gain finSIDE to the stereo signal extraction unit 136.
The stereo signal extraction unit 135 extracts the signal LC1 for the L channel and the signal RC1 for the R channel from the signal MX1(=the signal M1×foutMID) and the signal SX1(=the signal S1×foutSIDE). And the stereo signal extraction unit 135 sends the signal LC1 to the signal addition unit 137L, and sends the signal RC1 to the signal addition unit 137R.
Moreover, the stereo signal extraction unit 136 extracts the signal LC2 for the L channel and the signal RC2 for the R channel from the signal MX2(=the signal M2×finMID) and the signal SX2(=the signal S2×finSIDE). And the stereo signal extraction unit 136 sends the signal LC2 to the signal addition unit 137L, and sends the signal RC2 to the signal addition unit 137R.
The signal addition unit 137L adds together the signal LC1 sent from the stereo signal extraction unit 135 and the signal LC2 sent from the stereo signal extraction unit 136 to generate an L channel musical piece reproduction signal CLD, and sends the signal CLD to the analog processing unit 150. Moreover, the signal addition unit 137R adds together the signal RC1 sent from the stereo signal extraction unit 135 and the signal RC2 sent from the stereo signal extraction unit 136 to generate an R channel musical piece reproduction signal CRD, and sends the signal CRD to the analog processing unit 150.
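Taken together, one processing step during the overlap of the two fades can be sketched as follows, assuming the conventional M + S / M − S stereo reconstruction for Equations (5) and (6); the four gain arguments stand in for foutMID, foutSIDE, finMID, and finSIDE at the current time point.

```python
def crossfade_step(m1, s1, m2, s2,
                   g_out_mid, g_out_side, g_in_mid, g_in_side):
    """Apply the four gains, reconstruct both stereo pairs (L = M + S
    and R = M - S are assumed), and add them channel-wise to obtain
    the reproduction signals CLD and CRD."""
    cld = [(g_out_mid * a + g_out_side * b) + (g_in_mid * c + g_in_side * d)
           for a, b, c, d in zip(m1, s1, m2, s2)]
    crd = [(g_out_mid * a - g_out_side * b) + (g_in_mid * c - g_in_side * d)
           for a, b, c, d in zip(m1, s1, m2, s2)]
    return cld, crd
```

With g_out_mid = g_out_side = 1.0 and g_in_mid = g_in_side = 0.0, the output reduces to the preceding musical piece alone, matching the state before the cross-fade begins.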
Subsequently, audio output signals AOSL and AOSR are generated by the analog processing unit 150. And the speaker units 160L and 160R reproduce audio outputs according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutMID,END=τinMID,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192A sends a second gain command XC2 specifying the gain as foutSIDE to the second processing unit 212, and sends a fourth gain command XC4 specifying the gain as finSIDE to the fourth processing unit 214.
For the centrally localized audio component, the first processing unit 211 sends a signal MX1 obtained by multiplying the signal M1 by the gain “0” (i.e. a null audio signal) to the stereo signal extraction unit 135. Moreover, upon receipt of the signal M2, the third processing unit 213 employs the signal M2 as the signal MX2, and sends it to the stereo signal extraction unit 136.
On the other hand, for the non-centrally localized audio components, the second processing unit 212 sends a signal SX1 obtained by multiplying the signal S1 by the gain foutSIDE to the stereo signal extraction unit 135. Moreover, upon receipt of the signal S2, the fourth processing unit 214 sends a signal SX2 obtained by multiplying the signal S2 by the gain finSIDE to the stereo signal extraction unit 136.
The stereo signal extraction unit 135 extracts the signal LC1 for the L channel and the signal RC1 for the R channel from the signal SX1(=the signal S1×foutSIDE). And the stereo signal extraction unit 135 sends the signal LC1 to the signal addition unit 137L, and sends the signal RC1 to the signal addition unit 137R.
Moreover, the stereo signal extraction unit 136 extracts the signal LC2 for the L channel and the signal RC2 for the R channel from the signal MX2(=the signal M2) and the signal SX2(=the signal S2×finSIDE). And the stereo signal extraction unit 136 sends the signal LC2 to the signal addition unit 137L, and sends the signal RC2 to the signal addition unit 137R.
The signal addition unit 137L adds together the signal LC1 sent from the stereo signal extraction unit 135 and the signal LC2 sent from the stereo signal extraction unit 136 to generate an L channel musical piece reproduction signal CLD, and sends the signal CLD to the analog processing unit 150. Moreover, the signal addition unit 137R adds together the signal RC1 sent from the stereo signal extraction unit 135 and the signal RC2 sent from the stereo signal extraction unit 136 to generate an R channel musical piece reproduction signal CRD, and sends the signal CRD to the analog processing unit 150.
Subsequently, audio output signals AOSL and AOSR are generated by the analog processing unit 150. And the speaker units 160L and 160R reproduce audio outputs according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutSIDE,END=τinSIDE,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192A sends a third gain command XC3 specifying the gain as “1.0” to the third processing unit 213.
For the centrally localized audio component, the third processing unit 213 employs the signal M2 as the signal MX2, and sends it to the stereo signal extraction unit 136. On the other hand, for the non-centrally localized audio components, the fourth processing unit 214 employs the signal S2 as the signal SX2, and sends it to the stereo signal extraction unit 136.
Moreover, the stereo signal extraction unit 136 extracts the signal LC2 for the L channel and the signal RC2 for the R channel from the signal MX2(=the signal M2) and the signal SX2(=the signal S2). And the stereo signal extraction unit 136 sends the signal LC2 to the signal addition unit 137L, and sends the signal RC2 to the signal addition unit 137R.
The signal addition unit 137L takes the signal LC2 sent from the stereo signal extraction unit 136 as an L channel musical piece reproduction signal CLD, and sends the signal CLD to the analog processing unit 150. Moreover, the signal addition unit 137R takes the signal RC2 sent from the stereo signal extraction unit 136 as an R channel musical piece reproduction signal CRD, and sends the signal CRD to the analog processing unit 150.
Subsequently, audio output signals AOSL and AOSR are generated by the analog processing unit 150. And the speaker units 160L and 160R reproduce audio outputs according to these audio output signals AOSL and AOSR.
The timing of the cross-fade reproduction processing of the musical piece contents performed as described above is shown in
As explained above, in the first embodiment, the musical piece providing unit 121 generates a musical piece L channel signal L1 and a musical piece R channel signal R1 from the musical piece signal MD1, and the musical piece providing unit 122 generates a musical piece L channel signal L2 and a musical piece R channel signal R2 from the musical piece signal MD2. And next, the extraction unit 131 extracts the centrally localized audio component signal M1 and the non-centrally localized audio components signal S1 from the musical piece L channel signal L1 and the musical piece R channel signal R1. Moreover, the extraction unit 132 extracts the centrally localized audio component signal M2 and the non-centrally localized audio components signal S2 from the musical piece L channel signal L2 and the musical piece R channel signal R2.
Then, when performing cross-fade reproduction of the preceding musical piece and the succeeding musical piece, under the control of the reproduction control unit 192A, the cross-fade processing unit 133 performs fade-out processing for the preceding musical piece and performs fade-in processing for the succeeding musical piece, upon each of the audio component signals (M1 and M2) relating to the centrally localized regions and each of the audio component signals (S1 and S2) relating to the non-centrally localized regions.
During this control of the fade-out processing and the fade-in processing, the reproduction control unit 192A makes the first interval length of the first interval relating to the centrally localized regions from the start of fade-out processing to the end of fade-in processing be shorter than the second interval length of the second interval relating to the non-centrally localized regions from the start of fade-out processing to the end of fade-in processing, and also ensures that the first interval is completely contained within the second interval. Moreover, during this control, it is arranged to perform cross-fade reproduction by ensuring that the fade-out processing relating to the non-centrally localized regions is ended after the start of the fade-in processing relating to the non-centrally localized regions. Furthermore, during this control, it is arranged to perform cross-fade reproduction by ensuring that the fade-out processing relating to the centrally localized regions is ended after the start of the fade-in processing relating to the centrally localized regions.
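These timing constraints of the first embodiment can be stated compactly: writing the first (centrally localized) interval as [mid_start, mid_end] and the second (non-centrally localized) interval as [side_start, side_end], a validity check might be sketched as:

```python
def intervals_valid(mid_start, mid_end, side_start, side_end):
    """Check the first embodiment's constraints: the first interval
    (centrally localized regions) must be strictly shorter than the
    second interval (non-centrally localized regions) and completely
    contained within it."""
    return ((mid_end - mid_start) < (side_end - side_start)
            and side_start <= mid_start
            and mid_end <= side_end)
```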
In this manner, with the first embodiment, during changeover from the preceding musical piece to the succeeding musical piece with cross-fade reproduction, the changeover for the audio components relating to the centrally localized regions is made quicker than the changeover for the audio components relating to the non-centrally localized regions.
Due to this, in the first embodiment, it is possible to suppress failure of musical harmony in the centrally localized regions.
Moreover, by making the second interval be longer than the first interval, it is possible to impart a sense of realism, due to the cross-fade reproduction for the audio components relating to the non-centrally localized regions.
Thus, according to the first embodiment, when cross-fade reproduction is performed for musical pieces having a plurality of channels, it is possible to suppress the occurrence of a sense of auditory discomfort during the cross-fade interval.
The second embodiment of the present invention will now be explained with reference to
In comparison to the reproduction device 100A of the first embodiment described above, the reproduction device of the second embodiment differs by the feature of being provided with a control unit 190B having the configuration shown in
As shown in
The storage unit 191B comprises a non-volatile storage element. In the second embodiment, audio recording time information RTI, analysis information ANI, and cross-fade processing information XFB are included in the storage unit 191B.
As shown in
Note that, for the musical piece contents #1 which is the first in reproduction order, only the analysis information ANI#1MID,END and the analysis information ANI#1SIDE,END for the final portion thereof are included. Moreover, for the musical piece contents #P which is the last in reproduction order, only the analysis information ANI#PMID,START and the analysis information ANI#PSIDE,START for the initial portion thereof are included. In the second embodiment, the analysis information is tonal characteristic information that specifies the spectral envelope or the detailed spectral structure of the musical piece, or the impression given by its harmonic structure or the like, or its musical harmony. The analysis information ANI is generated by the reproduction control unit 192B and is stored in the storage unit 191B.
As shown in
Here, for the audio components relating to the centrally localized regions, the first interval processing information #q˜#(q+1) is information relating to the processing mode for the first interval from the start of the fade-out processing of the musical piece #q (i.e. the preceding musical piece) to the end of the fade-in processing for the musical piece #(q+1) (i.e. the succeeding musical piece). The information items (a1) through (g1) described above are information included in the first interval processing information #q˜#(q+1).
Moreover, for the audio components relating to the non-centrally localized regions, the second interval processing information #q˜#(q+1) is information relating to the processing mode for the second interval from the start of fade-out processing of the musical piece #q (i.e. the preceding musical piece) to the end of the fade-in processing for the musical piece #(q+1) (i.e. the succeeding musical piece). The information items (a2) through (g2) described above are information included in the second interval processing information #q˜#(q+1). The abovementioned cross-fade processing information XFB is determined by the reproduction control unit 192B, and is stored in the storage unit 191B.
Returning to
Furthermore, the reproduction control unit 192B generates the analysis information ANI. When generating the analysis information ANI, the reproduction control unit 192B receives from the digital processing unit 130 the signals (M1 and M2) for the audio components relating to the centrally localized regions of the initial portions of the musical piece audio data for the musical piece #1, the musical piece #2, . . . the musical piece #P. And the reproduction control unit 192B analyses the tonal characteristics of these signals, and generates the set of analysis information ANI#pMID,START. Moreover, the reproduction control unit 192B receives from the digital processing unit 130 the signals (M1 and M2) for the audio components relating to the centrally localized regions of the final portions of the musical piece audio data for the musical piece #1, the musical piece #2, . . . the musical piece #P. And the reproduction control unit 192B analyses the tonal characteristics of these signals, and generates the set of analysis information ANI#pMID,END.
Yet further, the reproduction control unit 192B receives from the digital processing unit 130 the signals (S1 and S2) for the audio components relating to the non-centrally localized regions of the initial portions of the musical piece audio data for the musical piece #1, the musical piece #2, . . . the musical piece #P. And the reproduction control unit 192B analyses the tonal characteristics of these signals, and generates the analysis information ANI#pSIDE,START. And the reproduction control unit 192B receives from the digital processing unit 130 the signals (S1 and S2) for the audio components relating to the non-centrally localized regions of the final portions of the musical piece audio data for the musical piece #1, the musical piece #2, . . . the musical piece #P. And the reproduction control unit 192B analyses the tonal characteristics of these signals, and generates the analysis information ANI#pSIDE,END.
The reproduction control unit 192B stores these sets of analysis information ANI#pMID,START, ANI#pMID,END, ANI#pSIDE,START, and ANI#pSIDE,END in the storage unit 191B.
Furthermore, the reproduction control unit 192B generates cross-fade processing information XFB. When generating the cross-fade processing information XFB, the reproduction control unit 192B determines and generates the first interval processing information #q˜#(q+1) on the basis of the analysis information ANI#qMID,END for the audio component relating to the centrally localized region of the final portion of the musical piece #q (i.e. the first analysis information), and on the basis of the analysis information ANI#(q+1)MID,START for the audio component relating to the centrally localized region of the initial portion of the musical piece #(q+1) (i.e. the second analysis information).
Moreover, the reproduction control unit 192B determines and generates second interval processing information #q˜#(q+1) on the basis of the analysis information ANI#qSIDE,END for the audio components relating to the non-centrally localized regions of the final portion of the musical piece #q (i.e. the third analysis information), and on the basis of the analysis information ANI#(q+1)SIDE,START for the audio components relating to the non-centrally localized regions of the initial portion of the musical piece #(q+1) (i.e. the fourth analysis information).
When generating the above described first interval processing information #q˜#(q+1) and second interval processing information #q˜#(q+1), the reproduction control unit 192B compares together the analysis information ANI#qMID,END (i.e. the first analysis information) and the analysis information ANI#(q+1)MID,START (i.e. the second analysis information), and calculates a first degree of tonal similarity. Moreover, the reproduction control unit 192B compares together the analysis information ANI#qSIDE,END (i.e. the third analysis information) and the analysis information ANI#(q+1)SIDE,START (i.e. the fourth analysis information), and calculates a second degree of tonal similarity. Here, the higher the degree of tonal similarity, the greater the degree of resemblance between the two sets of analysis information that are compared.
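The patent does not specify the actual similarity metric. As one plausible stand-in, a cosine similarity between spectral-envelope vectors has exactly the stated property that a higher value means a greater degree of resemblance:

```python
import math

def tonal_similarity(env_a, env_b):
    """Hypothetical degree-of-tonal-similarity measure: cosine
    similarity between two spectral-envelope vectors. This is a
    stand-in only; the second embodiment's actual analysis is
    unspecified."""
    dot = sum(a * b for a, b in zip(env_a, env_b))
    na = math.sqrt(sum(a * a for a in env_a))
    nb = math.sqrt(sum(b * b for b in env_b))
    return dot / (na * nb) if na > 0 and nb > 0 else 0.0
```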
Next, when (i) the first degree of tonal similarity is less than the first threshold value and the second degree of tonal similarity is greater than or equal to the second threshold value, then the reproduction control unit 192B generates first interval processing information #q˜#(q+1) in which the interval length of the first interval is set to be shorter than the interval length of the second interval, and in which the fade-out processing relating to the centrally localized region is ended before the start of the fade-in processing relating to the centrally localized region. Moreover, when the first degree of tonal similarity is less than the first threshold value and the second degree of tonal similarity is greater than or equal to the second threshold value, then the reproduction control unit 192B generates second interval processing information #q˜#(q+1) in which the fade-out processing relating to the non-centrally localized regions is ended after the start of the fade-in processing relating to the non-centrally localized regions.
An example of the timing at which cross-fade reproduction processing for the musical pieces is executed according to the first interval processing information and the second interval processing information when condition (i) is satisfied is shown in
Here, the gain function goutMID(t−τoutMID,START) shown in
Next, when (ii) the first degree of tonal similarity is greater than or equal to the third threshold value and the second degree of tonal similarity is less than the fourth threshold value, then the reproduction control unit 192B generates first interval processing information #q˜#(q+1) in which the interval length of the first interval is set to be longer than the interval length of the second interval, and in which the fade-out processing relating to the centrally localized region is ended after the start of the fade-in processing relating to the centrally localized region. Moreover, when the first degree of tonal similarity is greater than or equal to the third threshold value and the second degree of tonal similarity is less than the fourth threshold value, then the reproduction control unit 192B generates second interval processing information #q˜#(q+1) in which the fade-out processing relating to the non-centrally localized regions is ended before the start of the fade-in processing relating to the non-centrally localized regions.
A timing at which cross-fade reproduction processing for musical pieces is executed according to first interval processing information and second interval processing information when condition (ii) is satisfied is shown in
Here, the gain function houtMID(t−τoutMID,START) shown in
Note that, in the second embodiment, when the first degree of tonal similarity and the second degree of tonal similarity do not satisfy condition (i) or (ii) mentioned above and also it is possible, on the basis of the first degree of tonal similarity and the second degree of tonal similarity, to arrive at the evaluation that both the centrally localized audio components and also the non-centrally localized audio components of the preceding musical piece and the succeeding musical piece resemble one another, then first interval processing information #q˜#(q+1) is generated in which the interval length of the first interval is set to be shorter than the interval length of the second interval, and in which the fade-out processing relating to the centrally localized regions is ended after the start of the fade-in processing relating to the centrally localized regions. Moreover, when the condition (i) or (ii) above is not satisfied and also, on the basis of the first degree of tonal similarity and the second degree of tonal similarity, it is possible to arrive at the evaluation that both the centrally localized audio components and also the non-centrally localized audio components of the preceding musical piece and the succeeding musical piece resemble one another, then the reproduction control unit 192B generates second interval processing information #q˜#(q+1) in which the fade-out processing relating to the non-centrally localized regions is ended after the start of the fade-in processing relating to the non-centrally localized regions (refer to
Moreover, when the first degree of tonal similarity and the second degree of tonal similarity do not satisfy condition (i) or (ii) mentioned above and also it is possible, on the basis of the first degree of tonal similarity and the second degree of tonal similarity, to arrive at the evaluation that both the centrally localized audio components and the non-centrally localized audio components of the preceding musical piece and the succeeding musical piece do not resemble one another, then it is arranged to generate first interval processing information #q˜#(q+1) in which the fade-out processing relating to the centrally localized regions is ended before the start of the fade-in processing relating to the centrally localized regions, and to generate second interval processing information #q˜#(q+1) in which the fade-out processing relating to the non-centrally localized regions is ended after the start of the fade-in processing relating to the non-centrally localized regions.
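The four cases above (condition (i), condition (ii), both resembling, neither resembling) can be sketched as a small selection function. This is a minimal illustration only: the function name, the plan representation, and the concrete similarity and threshold values are assumptions, and condition (i) is assumed to be the mirror of condition (ii) as described in this excerpt.

```python
def choose_interval_plans(sim_mid, sim_side, thr_mid, thr_side):
    """Return a plan dict for the first (mid) and second (side) intervals.

    mid_overlap / side_overlap: True when the fade-out is ended AFTER the
    start of the corresponding fade-in, i.e. a genuine cross-fade occurs.
    mid_longer: True when the first interval is longer than the second,
    False when shorter, None where this excerpt leaves it unspecified.
    """
    mid_alike = sim_mid >= thr_mid     # compared against the third threshold value
    side_alike = sim_side >= thr_side  # compared against the fourth threshold value
    if not mid_alike and side_alike:
        # condition (i), assumed to mirror condition (ii)
        return {"mid_longer": False, "mid_overlap": False, "side_overlap": True}
    if mid_alike and not side_alike:
        # condition (ii): cross-fade only the centrally localized region
        return {"mid_longer": True, "mid_overlap": True, "side_overlap": False}
    if mid_alike and side_alike:
        # both resemble one another: cross-fade both regions
        return {"mid_longer": False, "mid_overlap": True, "side_overlap": True}
    # neither resembles: cross-fade only the non-centrally localized regions
    return {"mid_longer": None, "mid_overlap": False, "side_overlap": True}
```

In each case the plan answers the two questions the reproduction control unit 192B must settle before generating the interval processing information: whether the fade-out and fade-in overlap in each region, and which interval is the longer one.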
Moreover, the reproduction control unit 192B generates a musical piece reproduction plan for the musical piece contents on the basis of the audio recording time information RTI and the cross-fade processing information XFB, and stores this reproduction plan internally.
Furthermore, the reproduction control unit 192B receives the input data IPD sent from the input unit 180. When the content of this input data IPD is a reproduction command for musical piece contents, then, according to the musical piece reproduction plan, the reproduction control unit 192B generates a reproduction command PC1 and sends it to the musical piece providing unit 121, and also generates a reproduction command PC2 and sends it to the musical piece providing unit 122.
Yet further, the reproduction control unit 192B generates gain commands XC and sends them to the cross-fade processing unit 133, and performs reproduction control of the musical piece contents. The details of the reproduction control procedure performed by the reproduction control unit 192B will be described hereinafter.
The operation of the reproduction device 100B having the configuration described above will now be explained, with principal emphasis being given to the processing for the cross-fade reproduction of musical pieces.
In the following explanation it will be supposed that the preceding musical piece is musical piece #1 and the succeeding musical piece is musical piece #2, and the explanation will concentrate upon the control for cross-fade reproduction between the musical piece #1 and the musical piece #2.
As a preliminary, it will be supposed that the reproduction control unit 192B generates the audio recording time information RTI, the analysis processing information ANI, and the cross-fade processing information XFB, and stores this information in the storage unit 181B. Moreover, it will be supposed that the reproduction control unit 192B generates a musical piece reproduction plan for the musical piece contents on the basis of the audio recording time information RTI and the cross-fade processing information XFB.
Suppose that, in a situation of this type, a reproduction command for the contents of musical pieces is inputted to the input unit 180, and that the reproduction control unit 192B starts to receive input data IPD which is the contents of that reproduction command.
<<Cross-Fade Reproduction Processing of the Musical Piece Contents (i)>>
First, the control of cross-fade reproduction processing of the musical piece contents by the reproduction control unit 192B on the basis of first interval processing information #1˜#2 and second interval processing information #1˜#2 generated based upon condition (i) above will be explained (refer to
Upon receipt of the input data IPD, according to the musical piece reproduction plan, the reproduction control unit 192B generates a reproduction command PC1 in which the musical piece #1 is specified, and sends the reproduction command to the musical piece providing unit 121 (at the time point t=0).
In the following explanation, the time when the reproduction control unit 192B sends the reproduction command PC1 to the musical piece providing unit 121 will be taken as the time point “0”, and the control of the reproduction processing of the musical piece contents from this time point t=0 will be explained in the order that time points elapse. In other words, in the following description, the control for the cross-fade processing of the musical piece contents will be explained by referring to the time points t=“0”, “τoutSIDE,START=τinSIDE,START”, “τoutMID,START”, “τoutMID,END”, “τinMID,START”, “τinMID,END”, and “τoutSIDE,END=τinSIDE,END” in that order (refer to
Moreover, at the time point t=0, the reproduction control unit 192B sends a first gain command XC1 specifying the gain as “1.0” to the first processing unit 211. And, at the time point t=0, the reproduction control unit 192B sends a second gain command XC2 specifying the gain as “1.0” to the second processing unit 212.
Upon receipt of the reproduction command PC1, the musical piece providing unit 121 reads in the musical piece audio data #1 as the musical piece signal MD1 from the audio source unit 110, and generates a signal L1 and a signal R1. And the musical piece providing unit 121 sends the signal L1 and the signal R1 that it has thus generated to the digital processing unit 130.
In the digital processing unit 130, the extraction unit 131 extracts a signal M1 and a signal S1 from the signal L1 and the signal R1. And the extraction unit 131 sends the signal M1 to the first processing unit 211, and sends the signal S1 to the second processing unit 212.
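The extraction performed by the extraction unit 131 can be sketched as a conventional mid/side decomposition, in which the mid signal carries the centrally localized component (equal in both channels) and the side signal carries the non-centrally localized components. This excerpt does not state the exact formula used, so the half-sum/half-difference form below is an assumption for illustration.

```python
def extract_mid_side(l, r):
    """Assumed mid/side decomposition of a stereo pair (per-sample lists).
    mid  ~ centrally localized component (identical in L and R),
    side ~ non-centrally localized components (the L/R difference)."""
    mid = [(a + b) / 2.0 for a, b in zip(l, r)]
    side = [(a - b) / 2.0 for a, b in zip(l, r)]
    return mid, side

def reconstruct_stereo(mid, side):
    """Inverse of extract_mid_side: L = M + S, R = M - S."""
    l = [m + s for m, s in zip(mid, side)]
    r = [m - s for m, s in zip(mid, side)]
    return l, r
```

With this decomposition, the round trip extract → reconstruct recovers the original L and R signals exactly, which is why the signals M1 and S1 can be gain-processed independently and recombined downstream.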
According to the first gain command XC1 (gain “1.0”), the first processing unit 211 sends the signal M1 to the stereo signal extraction unit 135 as the signal MX1. Moreover, according to the second gain command XC2 (gain “1.0”), the second processing unit 212 sends the signal S1 to the stereo signal extraction unit 135 as the signal SX1.
Subsequently processing is performed by the stereo signal extraction unit 135, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutSIDE,START=τinSIDE,START” arrives (refer to
In the digital processing unit 130, the extraction unit 132 extracts a signal M2 and a signal S2 from the signal L2 and the signal R2. And the extraction unit 132 sends the signal M2 to the third processing unit 213, and sends the signal S2 to the fourth processing unit 214.
Furthermore, when the time point “τoutSIDE,START=τinSIDE,START” arrives, in order to start the fade-out processing for the signal S1 relating to the non-centrally localized regions of the preceding musical piece (i.e. of the musical piece #1), the reproduction control unit 192B generates a second gain command XC2 that designates the gain goutSIDE, and sends the gain command to the second processing unit 212. Furthermore, when the time point “τoutSIDE,START=τinSIDE,START” arrives, the reproduction control unit 192B generates a fourth gain command XC4 that designates the gain ginSIDE in order to start the fade-in processing for the signal S2 relating to the non-centrally localized regions of the succeeding musical piece (i.e. of the musical piece #2), and sends the gain command to the fourth processing unit 214.
Moreover, at this time, the reproduction control unit 192B sends a first gain command XC1 specifying the gain as “1” to the first processing unit 211, and sends a third gain command XC3 specifying the gain as “0” to the third processing unit 213.
At this time, for the centrally localized audio component, the first processing unit 211 takes the signal M1 as the signal MX1, which it sends to the stereo signal extraction unit 135.
On the other hand, for the non-centrally localized audio components, the second processing unit 212 multiplies the signal S1 by the gain goutSIDE to obtain a signal SX1, which it sends to the stereo signal extraction unit 135. Moreover, the fourth processing unit 214 multiplies the signal S2 by the gain ginSIDE to obtain a signal SX2, which it sends to the stereo signal extraction unit 136. In other words, cross-fade reproduction is performed for the audio components relating to the non-centrally localized regions.
Subsequently processing is performed by the stereo signal extraction units 135 and 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
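The pair of gains goutSIDE and ginSIDE applied above realises the cross-fade of the non-centrally localized components. The ramp shape is only shown in the referenced figures, so a linear ramp is assumed in this sketch; with linear ramps the two gains sum to 1 throughout the cross-fade interval. Summing the two gained signals in one step here is a simplification: in the device, SX1 and SX2 pass through separate stereo signal extraction units and are combined later by the signal addition units.

```python
def g_out_side(t, t_start, t_end):
    """Assumed linear fade-out gain: 1.0 at t_start, 0.0 at t_end."""
    if t <= t_start:
        return 1.0
    if t >= t_end:
        return 0.0
    return (t_end - t) / (t_end - t_start)

def g_in_side(t, t_start, t_end):
    """Assumed linear fade-in gain, complementary to g_out_side."""
    return 1.0 - g_out_side(t, t_start, t_end)

def crossfade_side(s1, s2, t, t_start, t_end):
    """Per-sample mix of the two side signals during the cross-fade."""
    g_out = g_out_side(t, t_start, t_end)
    g_in = g_in_side(t, t_start, t_end)
    return [g_out * a + g_in * b for a, b in zip(s1, s2)]
```

An equal-power (square-root or cosine) ramp could equally be used; the complementary-sum property shown here is specific to the assumed linear shape.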
Thereafter, when the time point “τoutMID,START” arrives (refer to
Note that, at this time, the reproduction control unit 192B sends a second gain command XC2 that specifies the gain goutSIDE to the second processing unit 212.
For the centrally localized audio components, the first processing unit 211 multiplies the signal M1 by the gain goutMID to obtain the signal MX1, which it sends to the stereo signal extraction unit 135. On the other hand, for the non-centrally localized audio components, the second processing unit 212 multiplies the signal S1 by the gain goutSIDE to obtain the signal SX1, which it sends to the stereo signal extraction unit 135. Moreover, for the non-centrally localized audio components, the fourth processing unit 214 multiplies the signal S2 by the gain ginSIDE to obtain the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction units 135 and 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutMID,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a third gain command XC3 specifying the gain as “0” to the third processing unit 213. And, at this time, the reproduction control unit 192B sends a second gain command XC2 specifying the gain goutSIDE to the second processing unit 212, and sends a fourth gain command XC4 specifying the gain ginSIDE to the fourth processing unit 214.
At this time, reproduction is not performed for the audio components relating to the centrally localized regions. On the other hand, for the non-centrally localized audio components, the second processing unit 212 multiplies the signal S1 by the gain goutSIDE to obtain the signal SX1, which it sends to the stereo signal extraction unit 135. Moreover, the fourth processing unit 214 multiplies the signal S2 by the gain ginSIDE to obtain the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction units 135 and 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τinMID,START” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a first gain command XC1 specifying the gain as “0” to the first processing unit 211. And, at this time, the reproduction control unit 192B sends a second gain command XC2 specifying the gain as goutSIDE to the second processing unit 212, and sends a fourth gain command XC4 specifying the gain as ginSIDE to the fourth processing unit 214.
At this time, for the centrally localized audio component, the third processing unit 213 multiplies the signal M2 by the gain ginMID to obtain the signal MX2, which it sends to the stereo signal extraction unit 136.
On the other hand, for the non-centrally localized audio components, the second processing unit 212 multiplies the signal S1 by the gain goutSIDE to obtain the signal SX1, which it sends to the stereo signal extraction unit 135. Moreover, the fourth processing unit 214 multiplies the signal S2 by the gain ginSIDE to obtain the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction units 135 and 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τinMID,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B generates a second gain command XC2 that designates the gain goutSIDE, and sends the command to the second processing unit 212. And the reproduction control unit 192B generates a fourth gain command XC4 that designates the gain ginSIDE, and sends the command to the fourth processing unit 214.
For the centrally localized audio components, the third processing unit 213 takes the signal M2 as the signal MX2, and sends it to the stereo signal extraction unit 136. On the other hand, for the non-centrally localized audio components, the second processing unit 212 multiplies the signal S1 by the gain goutSIDE to obtain the signal SX1, which it sends to the stereo signal extraction unit 135. Moreover, for the non-centrally localized audio components, the fourth processing unit 214 multiplies the signal S2 by the gain ginSIDE to obtain the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction units 135 and 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutSIDE,END=τinSIDE,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a third gain command XC3 specifying the gain as “1” to the third processing unit 213.
For the centrally localized audio components, the third processing unit 213 takes the signal M2 as the signal MX2, which it sends to the stereo signal extraction unit 136. On the other hand, for the non-centrally localized audio components, the fourth processing unit 214 takes the signal S2 as the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction unit 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
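The whole gain schedule for condition (i) walked through above can be summarised in one function that returns the gains sent to the four processing units 211 to 214 at any time t. The linear ramp shape and the dictionary of time-point names are assumptions; the ordering of the time points (side fade spanning the whole interval, mid fade-out ending before the mid fade-in starts) follows the text.

```python
def ramp(t, t0, t1, v0, v1):
    """Assumed linear gain ramp from v0 at t0 to v1 at t1."""
    if t <= t0:
        return v0
    if t >= t1:
        return v1
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def gains_condition_i(t, tau):
    """Gains for processing units 211-214 under condition (i).

    tau maps the time-point names from the text to concrete times, with
    outSIDE_start == inSIDE_start, outSIDE_end == inSIDE_end, and
    outMID_end < inMID_start (no cross-fade for the mid region)."""
    g1 = ramp(t, tau["outMID_start"], tau["outMID_end"], 1.0, 0.0)    # g_outMID
    g2 = ramp(t, tau["outSIDE_start"], tau["outSIDE_end"], 1.0, 0.0)  # g_outSIDE
    g3 = ramp(t, tau["inMID_start"], tau["inMID_end"], 0.0, 1.0)      # g_inMID
    g4 = ramp(t, tau["inSIDE_start"], tau["inSIDE_end"], 0.0, 1.0)    # g_inSIDE
    return g1, g2, g3, g4
```

Between the end of the mid fade-out and the start of the mid fade-in, both g1 and g3 are zero, which is the interval in which only the non-centrally localized components are reproduced, exactly as in the walk-through above.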
<<Cross-Fade Reproduction Processing of the Musical Piece Contents (ii)>>
Next, control will be explained of the cross-fade reproduction processing of the musical piece contents on the basis of the first interval processing information #1˜#2 and the second interval processing information #1˜#2 generated based upon condition (ii) described above (refer to
Upon receipt of the input data IPD, according to the musical piece reproduction plan, the reproduction control unit 192B generates a reproduction command PC1 in which the musical piece #1 is designated, and supplies the reproduction command to the musical piece providing unit 121 (at the time point t=0).
In the following explanation, taking the time point at which the reproduction command PC1 is sent to the musical piece providing unit 121 as being the time point “0”, the control by the reproduction control unit 192B of the reproduction processing of the musical piece contents from the time point t=0 onward will be explained, in order of the time points. In other words, in the following, the cross-fade processing for the musical piece contents at the time points t=“0”, “τoutMID,START”, “τoutSIDE,START”, “τoutSIDE,END”, “τinMID,START”, “τoutMID,END”, “τinSIDE,START”, “τinSIDE,END”, and “τinMID,END” will be explained in that order (refer to
Moreover, at the time point t=0, the reproduction control unit 192B sends a first gain command XC1 specifying the gain as “1.0” to the first processing unit 211. And, at the time point t=0, the reproduction control unit 192B sends a second gain command XC2 specifying the gain as “1.0” to the second processing unit 212.
The musical piece providing unit 121 reads in the musical piece audio data #1 from the audio source unit 110 as the musical piece signal MD1, and generates a signal L1 and a signal R1. And the musical piece providing unit 121 sends the signal L1 and the signal R1 that it has thus generated to the digital processing unit 130.
In the digital processing unit 130, the extraction unit 131 extracts a signal M1 and a signal S1 from the signal L1 and the signal R1. And the extraction unit 131 sends the signal M1 to the first processing unit 211, and sends the signal S1 to the second processing unit 212.
According to the first gain command XC1 (gain “1.0”), the first processing unit 211 takes the signal M1 as the signal MX1, and sends it to the stereo signal extraction unit 135. Moreover, according to the second gain command XC2 (gain “1.0”), the second processing unit 212 takes the signal S1 as the signal SX1, and sends it to the stereo signal extraction unit 135.
Subsequently processing is performed by the stereo signal extraction unit 135, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutMID,START” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a second gain command XC2 specifying the gain as “1.0” to the second processing unit 212.
For the centrally localized audio component, the first processing unit 211 multiplies the signal M1 by the gain houtMID to obtain the signal MX1, which it sends to the stereo signal extraction unit 135. On the other hand, for the non-centrally localized audio components, the second processing unit 212 takes the signal S1 as being the signal SX1, which it sends to the stereo signal extraction unit 135.
Subsequently processing is performed by the stereo signal extraction unit 135, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutSIDE,START” arrives (refer to
Note that, at this time, the reproduction control unit 192B sends a first gain command XC1 that specifies the gain houtMID to the first processing unit 211.
For the centrally localized audio component, the first processing unit 211 multiplies the signal M1 by the gain houtMID to obtain the signal MX1, which it sends to the stereo signal extraction unit 135. On the other hand, for the non-centrally localized audio components, the second processing unit 212 multiplies the signal S1 by the gain houtSIDE to obtain the signal SX1, which it sends to the stereo signal extraction unit 135.
Subsequently processing is performed by the stereo signal extraction unit 135, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutSIDE,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a first gain command XC1 that specifies the gain houtMID to the first processing unit 211.
At this time, for the centrally localized audio component, the first processing unit 211 multiplies the signal M1 by the gain houtMID to obtain the signal MX1, which it sends to the stereo signal extraction unit 135. On the other hand, no reproduction is performed for the audio components relating to the non-centrally localized regions.
Subsequently processing is performed by the stereo signal extraction unit 135, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τinMID,START” arrives (refer to
In the digital processing unit 130, the extraction unit 132 extracts a signal M2 and a signal S2 from the signal L2 and the signal R2. And the extraction unit 132 sends the signal M2 to the third processing unit 213, and sends the signal S2 to the fourth processing unit 214.
Furthermore, when the time point “τinMID,START” arrives, the reproduction control unit 192B generates a third gain command XC3 that designates the gain hinMID for starting the fade-in processing for the signal M2 relating to the centrally localized regions of the succeeding musical piece (i.e. of the musical piece #2), and sends the gain command to the third processing unit 213.
Moreover, at this time, the reproduction control unit 192B sends a first gain command XC1 specifying the gain as houtMID to the first processing unit 211. And, at this time, the reproduction control unit 192B sends a second gain command XC2 specifying the gain as “0” to the second processing unit 212.
At this time, for the centrally localized audio component, the first processing unit 211 multiplies the signal M1 by the gain houtMID to obtain the signal MX1, which it sends to the stereo signal extraction unit 135. Moreover, the third processing unit 213 multiplies the signal M2 by the gain hinMID to obtain the signal MX2, which it sends to the stereo signal extraction unit 136. In other words, cross-fade is performed for the audio component relating to the centrally localized region. On the other hand, reproduction is not performed for the audio components relating to the non-centrally localized regions.
Subsequently processing is performed by the stereo signal extraction units 135 and 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τoutMID,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a third gain command XC3 that specifies the gain hinMID to the third processing unit 213.
For the centrally localized audio component, the third processing unit 213 multiplies the signal M2 by the gain hinMID to obtain the signal MX2, which it sends to the stereo signal extraction unit 136. On the other hand, reproduction is not performed for the non-centrally localized audio components.
Subsequently processing is performed by the stereo signal extraction unit 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τinSIDE,START” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a third gain command XC3 that specifies the gain hinMID to the third processing unit 213.
For the centrally localized audio component, the third processing unit 213 multiplies the signal M2 by the gain hinMID to obtain the signal MX2, which it sends to the stereo signal extraction unit 136. On the other hand, for the non-centrally localized audio components, the fourth processing unit 214 multiplies the signal S2 by the gain hinSIDE to obtain the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction unit 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Moreover, when the time point “τinSIDE,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a third gain command XC3 that specifies the gain hinMID to the third processing unit 213.
For the centrally localized audio component, the third processing unit 213 multiplies the signal M2 by the gain hinMID to obtain the signal MX2, which it sends to the stereo signal extraction unit 136. On the other hand, for the non-centrally localized audio components, the fourth processing unit 214 takes the signal S2 as the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction unit 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
Thereafter, when the time point “τinMID,END” arrives (refer to
Moreover, at this time, the reproduction control unit 192B sends a fourth gain command XC4 specifying the gain as “1.0” to the fourth processing unit 214.
For the centrally localized audio component, the third processing unit 213 takes the signal M2 as the signal MX2, which it sends to the stereo signal extraction unit 136. On the other hand, for the non-centrally localized audio components, the fourth processing unit 214 takes the signal S2 as the signal SX2, which it sends to the stereo signal extraction unit 136.
Subsequently processing is performed by the stereo signal extraction unit 136, the signal addition units 137L and 137R, and the analog processing unit 150, and the audio output signals AOSL and AOSR are generated. And the speaker units 160L and 160R reproduce audio output according to these audio output signals AOSL and AOSR.
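The gain schedule for condition (ii) walked through above is the mirror of the condition (i) schedule: the mid fades overlap while the side fades do not. As before, the linear ramp shape is an assumption (the actual gain function houtMID is only shown in a referenced figure), and the time-point names are mapped to an illustrative dictionary.

```python
def ramp(t, t0, t1, v0, v1):
    """Assumed linear gain ramp from v0 at t0 to v1 at t1."""
    if t <= t0:
        return v0
    if t >= t1:
        return v1
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def gains_condition_ii(t, tau):
    """Gains for processing units 211-214 under condition (ii).

    The time points satisfy the ordering used in the walk-through:
    outMID_start < outSIDE_start < outSIDE_end < inMID_start
                 < outMID_end < inSIDE_start < inSIDE_end < inMID_end,
    so only the centrally localized region is cross-faded."""
    h1 = ramp(t, tau["outMID_start"], tau["outMID_end"], 1.0, 0.0)    # h_outMID
    h2 = ramp(t, tau["outSIDE_start"], tau["outSIDE_end"], 1.0, 0.0)  # h_outSIDE
    h3 = ramp(t, tau["inMID_start"], tau["inMID_end"], 0.0, 1.0)      # h_inMID
    h4 = ramp(t, tau["inSIDE_start"], tau["inSIDE_end"], 0.0, 1.0)    # h_inSIDE
    return h1, h2, h3, h4
```

Because the side fade-out ends before the side fade-in begins, the side gains h2 and h4 are never positive at the same time, whereas the mid gains h1 and h3 are both positive between τinMID,START and τoutMID,END.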
As explained above, in the second embodiment, in a similar manner to the case with the first embodiment, the signal M1 for the audio component relating to the centrally localized region and the signal S1 for the audio components relating to the non-centrally localized regions are extracted from the musical piece signal MD1, and the signal M2 for the audio component relating to the centrally localized region and the signal S2 for the audio components relating to the non-centrally localized regions are extracted from the musical piece signal MD2.
When performing cross-fade reproduction of the preceding musical piece and the succeeding musical piece, under the control of the reproduction control unit 192B, the cross-fade processing unit 133 performs fade-out processing for the preceding musical piece and performs fade-in processing for the succeeding musical piece, upon each of the audio component signals (M1 and M2) relating to the centrally localized regions and each of the audio component signals (S1 and S2) relating to their non-centrally localized regions.
Moreover, when controlling this fade-out processing and fade-in processing, the reproduction control unit 192B analyzes the tonal characteristics of the final portion of the preceding musical piece and the tonal characteristics of the initial portion of the succeeding musical piece. And, when it has been evaluated that a sense of auditory discomfort may occur while performing the cross-fade reproduction for the audio components relating to the centrally localized regions, but that it is difficult for a sense of auditory discomfort to occur while performing the cross-fade reproduction for the audio components relating to the non-centrally localized regions, then the reproduction control unit 192B makes the first interval length of the first interval from the start of fade-out processing to the end of fade-in processing relating to the centrally localized regions be shorter than the second interval length of the second interval from the start of fade-out processing to the end of fade-in processing relating to the non-centrally localized regions.
Furthermore, during such control, it is arranged for the fade-out processing for the centrally localized region to be ended before the start of the fade-in processing for the centrally localized region, so that cross-fade reproduction is not performed for the centrally localized regions. Moreover, during such control, it is arranged for the fade-out processing for the non-centrally localized regions to be ended after the start of the fade-in processing for the non-centrally localized regions, so that cross-fade reproduction is performed for the non-centrally localized regions.
In this manner, when it is evaluated that a sense of auditory discomfort may occur while performing the cross-fade reproduction for the audio components relating to the centrally localized regions, but that it is difficult for a sense of auditory discomfort to occur while performing the cross-fade reproduction for the audio components relating to the non-centrally localized regions, then cross-fade reproduction is only performed for the audio components relating to the non-centrally localized regions.
Due to this, with the second embodiment, it is possible to avoid failure of the musical harmony when it is evaluated that a sense of auditory discomfort may occur while performing cross-fade reproduction for the audio components relating to the centrally localized regions.
Furthermore, in the second embodiment, when it is evaluated that a sense of auditory discomfort may occur while performing the cross-fade reproduction for the audio components relating to the non-centrally localized regions, but that it is difficult for a sense of auditory discomfort to occur while performing the cross-fade reproduction for the audio components relating to the centrally localized regions, then the reproduction control unit 192B makes the first interval length of the first interval from the start of fade-out processing to the end of fade-in processing relating to the centrally localized regions be longer than the second interval length of the second interval from the start of fade-out processing to the end of fade-in processing relating to the non-centrally localized regions.
Yet further, during such control, it is arranged for the fade-out processing relating to the centrally localized regions to be ended after the start of the fade-in processing relating to the centrally localized regions, so that cross-fade reproduction is performed for the centrally localized regions. Moreover, during such control, it is arranged for the fade-out processing for the non-centrally localized regions to be ended before the start of the fade-in processing for the non-centrally localized regions, so that cross-fade reproduction is not performed for the non-centrally localized regions.
In this manner, when it is evaluated that a sense of auditory discomfort may occur while performing the cross-fade reproduction for the audio components relating to the non-centrally localized regions, but that it is difficult for a sense of auditory discomfort to occur while performing the cross-fade reproduction for the audio components relating to the centrally localized regions, then cross-fade reproduction is only performed for the audio components relating to the centrally localized regions.
Due to the above, in the second embodiment, it is possible to prevent a failure of musical harmony when it is evaluated that a sense of auditory discomfort may occur while performing cross-fade reproduction for the audio components relating to the non-centrally localized regions.
Thus, according to the second embodiment, when cross-fade reproduction is performed for musical pieces having a plurality of channels, it is possible to suppress the occurrence of auditory discomfort during the cross-fade interval, in a similar manner to the case with the first embodiment described above.
The present invention is not to be considered as being limited to the embodiments described above; various modifications may be implemented thereto.
For example, in the first embodiment described above, it was arranged to perform cross-fade reproduction both for the audio components relating to the centrally localized regions and for the audio components relating to the non-centrally localized regions. By contrast, it would also be acceptable to arrange to perform cross-fade reproduction only for the audio components relating to the non-centrally localized regions.
Moreover, in the first embodiment described above, for the audio components relating to the centrally localized region, it was arranged for the start time point of the fade-out processing for the preceding musical piece and the start time point of the fade-in processing for the succeeding musical piece to be the same, and also it was arranged for the end time point of the fade-out processing for the preceding musical piece and the end time point of the fade-in processing for the succeeding musical piece to be the same (refer to
Furthermore, in the first embodiment described above, for the audio components relating to the non-centrally localized regions, it was arranged for the start time point of the fade-out processing for the preceding musical piece and the start time point of the fade-in processing for the succeeding musical piece to be the same, and also it was arranged for the end time point of the fade-out processing for the preceding musical piece and the end time point of the fade-in processing for the succeeding musical piece to be the same (refer to
Yet further, in the second embodiment described above, it was arranged for the reproduction control unit to generate the analysis information on the basis of the signals for the audio components relating to the centrally localized regions and on the basis of the signals for the audio components relating to the non-centrally localized regions. By contrast, it would also be acceptable to arrange to generate the analysis information on the basis of the signals for the audio components on the L channel and on the basis of the signals for the audio components on the R channel. Moreover, it would also be acceptable to arrange to generate the analysis information on the basis of the signals for the audio components relating to the centrally localized regions, on the basis of the signals for the audio components relating to the non-centrally localized regions, on the basis of the signals for the audio components on the L channel and on the basis of the signals for the audio components on the R channel.
Even further, in the second embodiment described above, it was arranged to generate the analysis information before performing the cross-fade reproduction of the musical pieces. By contrast, it would also be acceptable to arrange to generate the analysis information during the cross-fade reproduction of the musical pieces.
Still further, in the second embodiment described above, it was arranged for the reproduction control unit to generate the analysis information. By contrast, it would also be acceptable to arrange for some other device to generate the analysis information, and for the reproduction device to acquire this analysis information that has thus been generated.
Moreover, in the first and second embodiments described above, it was arranged to perform fade-out processing of the preceding musical piece and to perform fade-in processing of the succeeding musical piece by employing gain functions for the signals for the audio components relating to the centrally localized regions and for the signals for the audio components relating to the non-centrally localized regions having shapes as shown in
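As an illustrative sketch of such gain-function processing, simple linear ramps may be assumed; the gain shapes actually used in the embodiments are those shown in the figures and may differ, and all names below are hypothetical.

```python
import numpy as np

def fade_out_gain(n):
    """Illustrative linear gain function: 1 -> 0 over n samples."""
    return np.linspace(1.0, 0.0, n)

def fade_in_gain(n):
    """Illustrative linear gain function: 0 -> 1 over n samples."""
    return np.linspace(0.0, 1.0, n)

def crossfade(preceding_tail, succeeding_head):
    """Apply fade-out processing to the final portion of the preceding
    musical piece and fade-in processing to the initial portion of the
    succeeding musical piece, then mix the two."""
    n = min(len(preceding_tail), len(succeeding_head))
    return (preceding_tail[:n] * fade_out_gain(n)
            + succeeding_head[:n] * fade_in_gain(n))

# With constant unit signals, the complementary linear gains sum to unity,
# so the mixed level stays constant across the cross-fade interval.
mixed = crossfade(np.ones(8), np.ones(8))
```

In practice the same pair of gain functions would be applied separately to the signals for the centrally localized regions and to those for the non-centrally localized regions, with the interval lengths chosen as described above.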
Furthermore, in the first and second embodiments described above, it was arranged for the reproduction control unit to perform fade-out processing for the preceding musical piece and fade-in processing for the succeeding musical piece automatically upon each of the signals for the audio components relating to the centrally localized regions and the signals for the audio components relating to the non-centrally localized regions. By contrast, it would also be acceptable to arrange for the fade-out processing and the fade-in processing to be controlled manually upon each of the signals for the audio components relating to the centrally localized regions and the signals for the audio components relating to the non-centrally localized regions.
In this type of case, it would also be acceptable to arrange for a so-called disk jockey (DJ) to perform input operation for fade-out processing for the preceding musical piece and for fade-in processing for the succeeding musical piece, for each of the signals for the audio components relating to the centrally localized regions and the signals for the audio components relating to the non-centrally localized regions.
Yet further, in the first and second embodiments described above, it would also be acceptable to arrange to add sound effects such as additional reverberation or the like to the signals MX1, SX1, MX2, and SX2 that pass through the cross-fade processing unit.
Still further, in the first and second embodiments described above, it was arranged to perform cross-fade processing for stereo musical pieces in the two-channel stereo format. By contrast, it would also be acceptable to arrange to perform cross-fade processing for multi-channel surround-sound musical pieces. In this case, for example, the center channel signals may be taken as being the audio components of the centrally localized regions. Furthermore, the front left channel signal and the front right channel signal (both together being termed “front channel signals”) may be taken as being signals for audio components relating to first non-centrally localized regions, and the rear left channel signal and the rear right channel signal (both together being termed “rear channel signals”) may be taken as being signals for audio components relating to second non-centrally localized regions.
And it would be possible to arrange to perform fade-out processing and fade-in processing upon each of the audio components for the center channel signals, the audio components for the front channel signals, and the audio components for the rear channel signals. In this case, when the first interval is taken as being from the start of fade-out processing to the end of fade-in processing for the audio component of the center channel signal, the second interval is taken as being from the start of fade-out processing to the end of fade-in processing for the audio component of the front channel signal, and the third interval is taken as being from the start of fade-out processing to the end of fade-in processing for the audio component of the rear channel signal, then it would be possible to arrange for the lengths of the intervals to be shorter in the order: the first interval, the second interval, and the third interval. And, for example, it would be possible to arrange to perform cross-fade reproduction for each of the first interval, the second interval, and the third interval.
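The interval ordering for this surround-sound case can be sketched minimally as follows; the concrete lengths and the names used are hypothetical assumptions, and only the ordering (first interval longest, third interval shortest) reflects the arrangement described above.

```python
def surround_fade_intervals(first_len, second_len, third_len):
    """Interval lengths (start of fade-out to end of fade-in) per channel
    group for the surround-sound case, shortening in the order:
    first (center), second (front), third (rear)."""
    assert first_len > second_len > third_len, \
        "intervals must shorten in the order: first, second, third"
    return {
        "center": first_len,   # first interval: centrally localized components
        "front":  second_len,  # second interval: front channel components
        "rear":   third_len,   # third interval: rear channel components
    }

intervals = surround_fade_intervals(12.0, 8.0, 4.0)
```

Cross-fade reproduction would then be performed within each of these three intervals, with fade-out and fade-in processing applied per channel group as in the two-channel case.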
It would be acceptable to arrange to build a part or all of the reproduction device described above (except for the audio source unit, the input unit, and the speaker units) as a computer including a central processing unit (CPU) or the like serving as a calculating device, and to implement the functions of the reproduction device according to one of the embodiments described above by a program prepared in advance being executed by that computer. This program could be recorded upon a computer-readable recording medium such as a hard disk, a CD-ROM, a DVD, or the like, and would be read out from that recording medium by the computer and executed. Moreover, it would be possible to arrange for this program to be acquired in a format in which it is recorded upon a transportable recording medium such as a CD-ROM, a DVD, or the like, or in a format in which it is distributed via a network such as the internet or the like.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2016/073265 | 8/8/2016 | WO | 00 |