The present technology relates to an encoding device and method, a decoding device and method, and a program, and particularly relates to an encoding device and method, a decoding device and method, and a program with which sound of an appropriate volume level can be obtained with a smaller quantity of codes.
In the past, according to the MPEG (Moving Picture Experts Group) AAC (Advanced Audio Coding) (ISO/IEC 14496-3:2001) multi-channel sound encoding technology, auxiliary information such as downmix and DRC (Dynamic Range Compression) information is recorded in a bitstream, and a reproducing side can use the auxiliary information depending on the environment (for example, see Non-patent Document 1).
By using such auxiliary information, the reproducing side can downmix a sound signal and control the volume to obtain a more appropriate level by DRC.
However, when reproducing a super-multi channel signal such as 11.1 channels (hereinafter channel is sometimes referred to as ch), because the reproducing environment may have various cases such as 2 ch, 5.1 ch, and 7.1 ch, it may be difficult to obtain a sufficient sound pressure or a sound may be clipped with a single downmix coefficient.
For example, in the above-mentioned MPEG AAC, auxiliary information such as downmix and DRC is encoded as gains in an MDCT (Modified Discrete Cosine Transform) domain. Because of this, for example, when an 11.1 ch bitstream is reproduced as it is at 11.1 ch or is downmixed to 2 ch and reproduced, the sound pressure level may be decreased or, conversely, a large amount of clipping may occur, and the volume level of the obtained sound may not be appropriate.
Further, if auxiliary information is encoded and transmitted for each reproducing environment, the quantity of codes of a bitstream may be increased.
The present technology has been made in view of the above-mentioned circumstances, and it is an object to obtain sound of an appropriate volume level with a smaller quantity of codes.
According to a first aspect of the present technology, an encoding device includes: a gain calculator that calculates a first gain value and a second gain value for volume level correction of each frame of a sound signal; and a gain encoder that obtains a first differential value between the first gain value and the second gain value, or obtains a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encodes information based on the first differential value or the second differential value.
The gain encoder may be caused to obtain the first differential value between the first gain value and the second gain value at a plurality of locations in the frame, or obtain the second differential value between the first gain values at a plurality of locations in the frame or between the first differential values at a plurality of locations in the frame.
The gain encoder may be caused to obtain the second differential value based on a gain change point, an inclination of the first gain value or the first differential value in the frame changing at the gain change point.
The gain encoder may be caused to obtain a differential between the gain change point and another gain change point to thereby obtain the second differential value.
The gain encoder may be caused to obtain a differential between the gain change point and a value predicted by first-order prediction based on another gain change point to thereby obtain the second differential value.
The gain encoder may be caused to encode the number of the gain change points in the frame and information based on the second differential value at the gain change points.
The gain encoder may be caused to calculate the second gain value for each sound signal of a different number of channels obtained by downmixing.
The gain encoder may be caused to select whether or not the first differential value is to be obtained, based on a correlation between the first gain value and the second gain value.
The gain encoder may be caused to variable-length-encode the first differential value or the second differential value.
According to the first aspect of the present technology, an encoding method or a program includes the steps of: calculating a first gain value and a second gain value for volume level correction of each frame of a sound signal; and obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value.
According to the first aspect of the present technology, there is calculated a first gain value and a second gain value for volume level correction of each frame of a sound signal; and there is obtained a first differential value between the first gain value and the second gain value, or there is obtained a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and there is encoded information based on the first differential value or the second differential value.
According to a second aspect of the present technology, a decoding device includes: a demultiplexer that demultiplexes an input code string into a gain code string and a signal code string, the gain code string being generated by, with respect to a first gain value and a second gain value for volume level correction calculated for each frame of a sound signal, obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value, the signal code string being obtained by encoding the sound signal; a signal decoder that decodes the signal code string; and a gain decoder that decodes the gain code string, and outputs the first gain value or the second gain value for the volume level correction.
The first differential value may be encoded by obtaining a differential value between the first gain value and the second gain value at a plurality of locations in the frame, and the second differential value may be encoded by obtaining a differential value between the first gain values at a plurality of locations in the frame or between the first differential values at a plurality of locations in the frame.
The second differential value may be obtained based on a gain change point, an inclination of the first gain value or the first differential value in the frame changing at the gain change point, whereby the second differential value is encoded.
The second differential value may be obtained based on a differential between the gain change point and another gain change point, whereby the second differential value is encoded.
The second differential value may be obtained based on a differential between the gain change point and a value predicted by first-order prediction based on another gain change point, whereby the second differential value is encoded.
The number of the gain change points in the frame and information based on the second differential value at the gain change points may be encoded as the second differential value.
According to the second aspect of the present technology, a decoding method or a program includes the steps of: demultiplexing an input code string into a gain code string and a signal code string, the gain code string being generated by, with respect to a first gain value and a second gain value for volume level correction calculated for each frame of a sound signal, obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value, the signal code string being obtained by encoding the sound signal; decoding the signal code string; and decoding the gain code string, and outputting the first gain value or the second gain value for the volume level correction.
According to the second aspect of the present technology, there is demultiplexed an input code string into a gain code string and a signal code string, the gain code string being generated by, with respect to a first gain value and a second gain value for volume level correction calculated for each frame of a sound signal, obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value, the signal code string being obtained by encoding the sound signal; there is decoded the signal code string; and there is decoded the gain code string, and there is output the first gain value or the second gain value for the volume level correction.
According to the first aspect and the second aspect of the present technology, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
Note that the effects described here are not necessarily limited, and any effect described in the present disclosure may be attained.
Hereinafter, with reference to the drawings, embodiments to which the present technology is applied will be described.
<Outline of the Present Technology>
First, the general DRC process of MPEG AAC will be described.
According to the example of
The primary information is the main information configuring an output-time-series signal, i.e., a sound signal encoded based on a scale factor, an MDCT coefficient, or the like. The auxiliary information, generally called metadata, is secondary information helpful for using the output-time-series signal for various purposes. The auxiliary information contains gain information and downmix information.
The downmix information is obtained by encoding, in the form of an index, a gain factor that is used to convert a sound signal of a plurality of channels, for example 11.1 ch, into a sound signal of a smaller number of channels. When decoding the sound signal, the MDCT coefficients of the channels are multiplied by a gain factor obtained based on the downmix information, and the MDCT coefficients of the respective channels, which are multiplied by the gain factor, are added, whereby an MDCT coefficient of a downmixed output channel is obtained.
Meanwhile, the gain information is obtained by encoding, in the form of an index, a gain factor that is used to convert all the channels or a predetermined group of channels to another signal level. With respect to the gain information, similar to the downmix gain factor, when decoding, the MDCT coefficients of the channels are multiplied by a gain factor obtained based on the gain information, whereby a DRC-processed MDCT coefficient is obtained.
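As an illustrative sketch of the gain application described above (not an actual decoder implementation; the function names and array shapes are assumptions for illustration), the downmix and DRC steps in the MDCT domain can be written as:

```python
import numpy as np

def downmix_mdct(mdct_coeffs, downmix_gains):
    """Downmix per-channel MDCT coefficients into one output channel.

    mdct_coeffs:   (num_channels, num_bins) array of MDCT coefficients
    downmix_gains: (num_channels,) gain factors decoded from the
                   downmix information
    """
    # Multiply each channel's MDCT coefficients by its gain factor,
    # then add the scaled channels to obtain the downmixed channel.
    return (mdct_coeffs * downmix_gains[:, np.newaxis]).sum(axis=0)

def apply_drc(mdct_coeffs, drc_gain):
    """Multiply MDCT coefficients by a gain factor obtained from the
    gain information, yielding DRC-processed MDCT coefficients."""
    return mdct_coeffs * drc_gain
```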
Next, the decoding process of a bitstream containing the above-mentioned information of
In the decoding device 11 of
The decoder/inverse quantizer circuit 22 decodes and inverse quantizes the signal code string supplied from the demultiplexing circuit 21, and supplies an MDCT coefficient obtained as the result thereof to the gain application circuit 23. Further, the gain application circuit 23 multiplies, based on downmix control information and DRC control information, the MDCT coefficient by gain factors obtained based on the gain information and the downmix information supplied from the demultiplexing circuit 21, and outputs the obtained gain-applied MDCT coefficient.
Here, each of the downmix control information and the DRC control information is information that is supplied from an upper control apparatus and indicates whether or not the downmix process or the DRC process is to be performed.
The inverse MDCT circuit 24 performs the inverse MDCT process to the gain-applied MDCT coefficient from the gain application circuit 23, and supplies the obtained inverse MDCT signal to the windowing/OLA circuit 25. Further, the windowing/OLA circuit 25 performs windowing and overlap-adding processes to the supplied inverse MDCT signal, and thereby obtains an output-time-series signal, which is output from the decoding device 11 of the MPEG AAC.
As described above, in the MPEG AAC, auxiliary information such as downmix and DRC is encoded as gains in an MDCT domain. Because of this, for example, when an 11.1 ch bitstream is reproduced as it is at 11.1 ch or is downmixed to 2 ch and reproduced, the sound pressure level may be decreased or, conversely, a large amount of clipping may occur, and the volume level of the obtained sound may not be appropriate.
For example, in the MPEG AAC (ISO/IEC 14496-3:2001), the Matrix-Mixdown process of section 4.5.1.2.2 describes a downmixing method from 5.1 ch to 2 ch as shown in the following mathematical formula (1).
[Math 1]
Lt=(1/(1+1/sqrt(2)+k))×(L+(1/sqrt(2))×C+k×Sl)
Rt=(1/(1+1/sqrt(2)+k))×(R+(1/sqrt(2))×C+k×Sr) (1)
Note that, in the mathematical formula (1), L, R, C, Sl, and Sr mean a left channel signal, a right channel signal, a center channel signal, a side left channel signal, and a side right channel signal of a 5.1 channel signal, respectively. Further, Lt and Rt mean 2 ch downmixed left channel and right channel signals, respectively.
Further, in the mathematical formula (1), k is a coefficient, which is used to adjust the mixing rate of the side channels, and one of 1/sqrt(2), 1/2, 1/(2×sqrt(2)), and 0 can be selected as the coefficient k.
Here, even if the signals of all the channels have the maximum amplitude, the downmixed signal is not clipped. In other words, if the amplitudes of the signals of all the L, R, C, Sl, and Sr channels are 1.0, according to the mathematical formula (1), the amplitudes of the Lt and Rt signals are 1.0, irrespective of the k value. In other words, this downmix formula is assured to generate no clip distortion.
Note that, if the coefficient k=1/sqrt(2), in the mathematical formula (1), the L or R gain is −7.65 dB, the C gain is −10.65 dB, and the Sl or Sr gain is −10.65 dB. So, the signal level is greatly decreased compared to the yet-to-be-downmixed signal level as a tradeoff for generating no clip distortion.
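The behavior described above can be checked numerically; the following is a small sketch of the mathematical formula (1) (the function name is chosen for illustration):

```python
import math

def matrix_mixdown(L, R, C, Sl, Sr, k=1.0 / math.sqrt(2)):
    """5.1 ch -> 2 ch downmix per the mathematical formula (1)."""
    norm = 1.0 / (1.0 + 1.0 / math.sqrt(2) + k)
    Lt = norm * (L + (1.0 / math.sqrt(2)) * C + k * Sl)
    Rt = norm * (R + (1.0 / math.sqrt(2)) * C + k * Sr)
    return Lt, Rt

# With all channel amplitudes at 1.0, the downmixed amplitudes are
# exactly 1.0 for any k, so no clip distortion is generated.
Lt, Rt = matrix_mixdown(1.0, 1.0, 1.0, 1.0, 1.0)

# For k = 1/sqrt(2), the L or R gain is 1/(1 + sqrt(2)), about -7.65 dB.
lr_gain_db = 20.0 * math.log10(1.0 / (1.0 + math.sqrt(2)))
```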
Due to concerns that the signal level may be decreased as described above, in the terrestrial digital broadcasting in Japan employing MPEG AAC, the downmixing method is described in section 6.2.1 (7-1) of the 5.0th edition of the digital broadcasting receiver apparatus standard ARIB (Association of Radio Industries and Businesses) STD-B21 as shown in the following mathematical formula (2).
[Math 2]
Lt=(1/sqrt(2))×(L+(1/sqrt(2))×C+k×Sl)
Rt=(1/sqrt(2))×(R+(1/sqrt(2))×C+k×Sr) (2)
Note that, in the mathematical formula (2), L, R, C, Sl, Sr, Lt, Rt, and k are the same as those of the mathematical formula (1).
In this example, as the coefficient k, similar to that of the mathematical formula (1), one of 1/sqrt(2), 1/2, 1/(2×sqrt(2)), and 0 can be selected.
According to the mathematical formula (2), if k=1/sqrt(2), the L or R gain of the mathematical formula (2) is −3 dB, the C gain is −6 dB, and the Sl or Sr gain is −6 dB, which mean that the difference of the level of the yet-to-be-downmixed signal and the level of the downmixed signal is smaller than that of the mathematical formula (1).
Note that, in this case, if L, R, C, Sl, and Sr are all 1.0, the signal is clipped. However, according to the description of Appendix-4 of ARIB STD-B21 5.0th edition, a clip distortion is hardly generated in a general signal if this downmix formula is used, and, in the case of overflow, if the signal is so-called soft-clipped, in which the sign is not inverted, the signal is not greatly distorted audibly.
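A sketch of the mathematical formula (2) illustrates the tradeoff: the level decrease is smaller than that of the mathematical formula (1), but an input in which all channels have the maximum amplitude overflows (the function name is an assumption for illustration):

```python
import math

def arib_downmix(L, R, C, Sl, Sr, k=1.0 / math.sqrt(2)):
    """5.1 ch -> 2 ch downmix per the mathematical formula (2)."""
    g = 1.0 / math.sqrt(2)
    Lt = g * (L + g * C + k * Sl)
    Rt = g * (R + g * C + k * Sr)
    return Lt, Rt

# The L or R gain is -3 dB, a smaller level decrease than formula (1) ...
lr_gain_db = 20.0 * math.log10(1.0 / math.sqrt(2))

# ... but if all channels have the maximum amplitude 1.0, the downmixed
# sample exceeds 1.0, i.e. the signal is clipped.
Lt, Rt = arib_downmix(1.0, 1.0, 1.0, 1.0, 1.0)
```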
However, in the above-mentioned example, the number of channels is 5.1. If 11.1 channels or a larger number of channels are encoded and downmixed, a larger clip distortion is generated and the level difference becomes larger.
In view of this, for example, instead of encoding DRC auxiliary information as a gain, a method of encoding an index of a known DRC property may be employed. In this case, when decoding, the DRC process is performed such that the decoded PCM (Pulse Code Modulation) signal, i.e., the above-mentioned output-time-series signal, has the DRC property of the index, whereby it is possible to prevent the sound pressure level from being decreased and prevent clips from being generated due to presence/absence of downmixing.
However, according to this method, the content creator side cannot express the DRC property freely because the decoding device side holds the DRC property information, and the calculation load is large because the decoding device side performs the DRC process itself.
Meanwhile, in order to prevent the downmixed signal level from being decreased and prevent a clip distortion from being generated, a method of applying a different DRC gain factor depending on presence/absence of downmixing may be employed.
However, if the number of channels is much larger than the conventional 5.1 channels, the number of downmix channel patterns also increases. For example, an 11.1 ch signal may be downmixed to 7.1 ch, 5.1 ch, or 2 ch. In order to send a plurality of gains as described above, the quantity of codes is 4 times as large as that of the conventional case.
Further, in recent years, in the field of DRC, a demand for applying DRC coefficients of different ranges depending on listening environments is being increased. For example, the dynamic range required for listening at home is different from the dynamic range required for listening with a mobile terminal, and it is preferable to apply different DRC coefficients. In this case, if DRC coefficients of two different ranges are sent to a decoder side for each downmix case, the quantity of codes is 8 times as large as that when sending one DRC coefficient.
Further, according to a method of encoding one DRC gain factor (eight in a short window) for each time frame, such as MPEG AAC (ISO/IEC 14496-3:2001), the time resolution is inadequate, and a time resolution of 1 msec or less is required. In view of this, it is expected that the number of DRC gain factors may be increased further, and, if DRC gain factors are simply encoded by using a known method, the quantity of codes will be about 8 times to several tens of times as large as that of the conventional case.
In view of this, according to the present technology, a content creator at the encoding device side is capable of setting a DRC gain freely, a calculation load at the decoding device is reduced, and, at the same time, the quantity of codes necessary for transmission can be reduced. In other words, according to the present technology, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
<Example of Configuration of Encoding Device>
Next, a specific embodiment, to which the present technology is applied, will be described.
The encoding device 51 of
The first sound pressure level calculation circuit 61 calculates, based on an input time-series signal, i.e., a supplied multi-channel sound signal, the sound pressure levels of the channels of the input time-series signal, and obtains the representative values of the sound pressure levels of the channels as first sound pressure levels.
For example, a method of calculating a sound pressure level is based on the maximum value, the RMS (Root Mean Square), or the like of a sound signal for each channel of the input time-series signal of each time frame, and a sound pressure level is obtained for each channel configuring the input time-series signal for each time frame of the input time-series signal.
Further, as a method of calculating a representative value, i.e., a first sound pressure level, for example, a method of employing the maximum value of the sound pressure levels of each channel as a representative value, a method of calculating one representative value based on the sound pressure levels of each channel by using a predetermined calculation formula, or the like may be employed. Specifically, for example, a representative value can be calculated by using the loudness calculation formula described in ITU-R BS.1770-2 (March 2011).
Note that the representative value of sound pressure levels is obtained for each time frame of an input time-series signal. Further, the time frame, i.e., a unit to be processed by the first sound pressure level calculation circuit 61, is synchronized with a time frame of an input time-series signal processed by the below-described signal encoding circuit 67, and is a time frame equal to or shorter than the time frame processed by the signal encoding circuit 67.
The first sound pressure level calculation circuit 61 supplies the obtained first sound pressure level to the first gain calculation circuit 62. The first sound pressure level obtained as described above shows the representative sound pressure level of the channel of the input time-series signal, which contains sound signals of a predetermined number of channels such as 11.1 ch, for example.
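A minimal sketch of one such calculation follows; the RMS-per-channel and maximum-as-representative choices are just the example methods mentioned above, and the function name is an assumption:

```python
import numpy as np

def first_sound_pressure_level(frame):
    """Representative sound pressure level of one time frame.

    frame: (num_channels, frame_length) array of time-domain samples.
    Here the per-channel sound pressure level is the RMS, and the
    representative value is the maximum over the channels.
    """
    per_channel = np.sqrt(np.mean(frame ** 2, axis=1))  # RMS per channel
    return float(np.max(per_channel))                   # representative value
```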
The first gain calculation circuit 62 calculates a first gain based on the first sound pressure level supplied from the first sound pressure level calculation circuit 61, and supplies the first gain to the gain encoding circuit 66.
Here, the first gain shows a gain, which is used to correct the volume level of the input time-series signal, in order to obtain a sound having an appropriate volume level when the decoding device side reproduces an input time-series signal. In other words, if the input time-series signal is not downmixed, by correcting the volume level of the input time-series signal based on the first gain, the reproducing side is capable of obtaining a sound having an appropriate volume level.
There are various methods of obtaining a first gain, and, for example, the DRC properties of
Note that, in
Each of the polygonal line C1 and the polygonal line C2 shows the relation of input/output sound pressure levels. For example, according to the DRC property of the polygonal line C1, if a first sound pressure level of 0 dBFS is input, the volume level is corrected so that the sound pressure level of the input time-series signal becomes −27 dBFS. So, in this case, the first gain is −27 dB.
Meanwhile, for example, according to the DRC property of the polygonal line C2, if a first sound pressure level of 0 dBFS is input, the volume level is corrected so that the sound pressure level of the input time-series signal becomes −21 dBFS. So, in this case, the first gain is −21 dB.
Hereinbelow, the mode in which a volume level is corrected based on the DRC property of the polygonal line C1 will be referred to as DRC_MODE1. Further, the mode in which a volume level is corrected based on the DRC property of the polygonal line C2 will be referred to as DRC_MODE2.
The first gain calculation circuit 62 determines a first gain based on the DRC property of a specified mode such as DRC_MODE1 and DRC_MODE2. The first gain is output as a gain waveform, which is in sync with the time frame of the signal encoding circuit 67. In other words, the first gain calculation circuit 62 calculates a first gain for each sample of a time frame of the input time-series signal processed.
With reference to
Note that the downmixing circuit 63 may output one downmix signal or may output a plurality of downmix signals. For example, an input time-series signal of 11.1 ch is downmixed, and a downmix signal of a sound signal of 2 ch, a downmix signal of a sound signal of 5.1 ch, and a downmix signal of a sound signal of 7.1 ch may be generated.
The second sound pressure level calculation circuit 64 calculates a second sound pressure level based on a downmix signal, i.e., a multi-channel sound signal supplied from the downmixing circuit 63, and supplies the second sound pressure level to the second gain calculation circuit 65.
The second sound pressure level calculation circuit 64 uses the method the same as the method of calculating the first sound pressure level by the first sound pressure level calculation circuit 61, and calculates a second sound pressure level for each downmix signal.
The second gain calculation circuit 65 calculates, for each downmix signal, a second gain based on the second sound pressure level of the downmix signal supplied from the second sound pressure level calculation circuit 64, and supplies the second gain to the gain encoding circuit 66.
Here, the second gain calculation circuit 65 calculates the second gain based on the same DRC property and gain calculation method as those used by the first gain calculation circuit 62.
In other words, the second gain shows a gain, which is used to correct the volume level of the downmix signal, in order to obtain a sound having an appropriate volume level when the decoding device side downmixes and reproduces an input time-series signal. That is, if the input time-series signal is downmixed, by correcting the volume level of the obtained downmix signal based on the second gain, a sound having an appropriate volume level can be obtained.
Such a second gain can be a gain used to correct the volume level of a sound based on the DRC property to thereby obtain a more appropriate volume level, and, in addition, used to correct the sound pressure level, which is changed when it is downmixed.
Here, an example of a method of obtaining a gain waveform of a first gain or a second gain by each of the first gain calculation circuit 62 and the second gain calculation circuit 65 will be described specifically.
The gain waveform g(k, n) of the time frame k can be obtained based on calculation of the following mathematical formula (3).
[Math 3]
g(k,n)=A×Gt(k)+(1−A)×g(k,n−1) (3)
Note that, in the mathematical formula (3), n is a time sample having a value of 0 to N−1, where N is the time frame length, and Gt(k) is a target gain of the time frame k.
Further, in the mathematical formula (3), A is a value determined based on the following mathematical formula (4).
[Math 4]
A=1−exp(−1/(2×Fs×Tc(k))) (4)
In the mathematical formula (4), Fs is a sampling frequency (Hz), Tc(k) is a time constant of the time frame k, and exp(x) is an exponential function.
Further, in the mathematical formula (3), as g(k, n−1) where n=0, the terminal gain value g(k−1, N−1) of the previous time frame is used.
First, Gt(k) can be obtained based on a first sound pressure level or a second sound pressure level obtained by the above-mentioned first sound pressure level calculation circuit 61 or second sound pressure level calculation circuit 64, and based on the DRC properties of
For example, if the DRC_MODE2 property of
As a general feature of DRC, when a large sound pressure level is input, the gain is decreased, which is called an attack, and it is known that a shorter time constant is employed because the gain needs to be decreased sharply. Meanwhile, when a relatively small sound pressure level is input, the gain is returned, which is called a release, and it is known that a longer time constant is employed because the gain is returned slowly in order to reduce a sound wobble.
In general, the time constant differs depending on a desired DRC property. For example, a shorter time constant is set for an apparatus that records/reproduces human voices, such as a voice recorder, and, to the contrary, a longer release time constant is generally set for an apparatus that records/reproduces music, such as a portable music player. In the example described here, to simplify the description, if Gt(k)−g(k−1, N−1) is less than zero, the time constant as an attack is 20 msec, and if it is equal to or larger than zero, the time constant as a release is 2 sec.
As described above, according to the calculation based on the mathematical formula (3), the gain waveform g(k, n) as a first gain or a second gain can be obtained.
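A minimal sketch of the mathematical formulas (3) and (4), with the 20 msec attack and 2 sec release time constants of the example above (the function name and the gain unit are assumptions):

```python
import math

def gain_waveform(gt, g_prev, fs, frame_len):
    """Smoothed gain waveform g(k, n) of one time frame.

    gt:        target gain Gt(k) of the time frame
    g_prev:    terminal gain value g(k-1, N-1) of the previous frame
    fs:        sampling frequency Fs (Hz)
    frame_len: time frame length N
    """
    # Attack (gain decreasing): 20 msec time constant; release: 2 sec.
    tc = 0.02 if gt - g_prev < 0 else 2.0
    a = 1.0 - math.exp(-1.0 / (2.0 * fs * tc))   # mathematical formula (4)
    g = []
    prev = g_prev
    for _ in range(frame_len):
        prev = a * gt + (1.0 - a) * prev         # mathematical formula (3)
        g.append(prev)
    return g
```

Because the attack time constant is much shorter than the release time constant, the gain approaches the target much faster on an attack than on a release.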
With reference to
Here, when encoding the first gain and the second gain, the differential between those gains of the same time frame, the differential between the same gain of different time frames, or the differential between the different gains of the same (corresponding) time frame is arbitrarily calculated and encoded. Note that the differential between the different gains means the differential between the first gain and the second gain, or the differential between the different second gains.
The signal encoding circuit 67 encodes the supplied input time-series signal based on a predetermined encoding method, for example, a general encoding method such as the encoding method of MPEG AAC, and supplies a signal code string obtained as the result thereof to the multiplexing circuit 68. The multiplexing circuit 68 multiplexes the gain code string supplied from the gain encoding circuit 66, downmix information supplied from an upper control apparatus, and the signal code string supplied from the signal encoding circuit 67, and outputs an output code string obtained as the result thereof.
<First Gain and Second Gain>
Here, examples of the first gain and the second gain supplied to the gain encoding circuit 66 and the gain code string output from the gain encoding circuit 66 will be described.
For example, assume that the gain waveforms of
In the example of
Further, the polygonal line C23 shows the differential between the first gain and the second gain.
Because the correlation between the first gain and the second gain is high, as is apparent from the polygonal lines C21 to C23, encoding them by using this correlation is more efficient than encoding them independently. In view of this, the encoding device 51 obtains the differential between two gains out of gain information such as the first gain and the second gain, and efficiently encodes the differential and the one of the gains from which the differential has been obtained.
Hereinbelow, out of gain information such as the first gain or the second gain, primary gain information, from which other gain information is subtracted, will be sometimes referred to as a master gain sequence, and gain information, which is subtracted from the master gain sequence, will be sometimes referred to as a slave gain sequence. Further, the master gain sequence and the slave gain sequence will be referred to as a gain sequence if they are not distinguished from each other.
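Under the naming just introduced, the differential coding can be sketched as follows (a simplified sketch; in practice the differentials are further quantized and variable-length-encoded, and the function names are assumptions):

```python
def encode_gain_sequences(master, slaves):
    """Keep the master gain sequence as-is and replace each slave gain
    sequence by the differential (master - slave), which is small and
    cheap to encode when the sequences are highly correlated.

    master: list of gain values of the master gain sequence
    slaves: dict mapping a name to the gain values of a slave sequence
    """
    diffs = {name: [m - s for m, s in zip(master, seq)]
             for name, seq in slaves.items()}
    return master, diffs

def decode_gain_sequences(master, diffs):
    """Recover each slave gain sequence from the master and differentials."""
    return {name: [m - d for m, d in zip(master, diff)]
            for name, diff in diffs.items()}
```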
<Output Code String>
Further, in the above-mentioned example, the first gain is the gain of the input time-series signal of 11.1 ch, and the second gain is the gain of the downmix signal of 5.1 ch. In order to describe the relation between the master gain sequence and the slave gain sequence in detail, description will be made below on the assumption that, further, the gain of downmix signal of 7.1 ch and the gain of downmix signal of 2 ch are obtained by downmixing the input time-series signal of 11.1 ch. In other words, both the 7.1 ch gain and the 2 ch gain are the second gains obtained by the second gain calculation circuit 65. So, in this example, the second gain calculation circuit 65 calculates three second gains.
In this example, GAIN_SEQ0 shows the gain sequence of 11.1 ch, i.e., the first gain of the undownmixed input time-series signal of 11.1 ch. Further, GAIN_SEQ1 shows the gain sequence of 7.1 ch, i.e., the second gain of the downmix signal of 7.1 ch obtained as the result of downmixing.
Further, GAIN_SEQ2 shows the gain sequence of 5.1 ch, i.e., the second gain of the downmix signal of 5.1 ch, and GAIN_SEQ3 shows the gain sequence of 2 ch, i.e., the second gain of the downmix signal of 2 ch.
Further, in
In the time frame J, the gain sequence of 11.1 ch is the master gain sequence, and the other gain sequences of 7.1 ch, 5.1 ch, and 2 ch are slave gain sequences for the gain sequence of 11.1 ch.
So, in the time frame J, the gain sequence of 11.1 ch, i.e., the master gain sequence, is encoded as it is. Further, the differentials between the master gain sequence and the gain sequences of 7.1 ch, 5.1 ch, and 2 ch, i.e., the slave gain sequences, are obtained, and the differentials are encoded. The information obtained by encoding the gain sequences as described above is treated as a gain code string.
Further, in the time frame J, information showing the gain encoding mode, i.e., the relation between the master gain sequences and the slave gain sequences, is encoded, the gain encoding mode header HD11 is thus obtained, and the gain encoding mode header HD11 and the gain code string are added to an output code string.
If the gain encoding mode of the processed time frame is different from the gain encoding mode of the previous time frame, the gain encoding mode header is generated and is added to the output code string.
So, because the gain encoding mode of the time frame J is the same as the gain encoding mode of the time frame J+1, which is the frame next to the time frame J, the gain encoding mode header of the time frame J+1 is not encoded.
To the contrary, because the correspondence relation between the master gain sequences and the slave gain sequences of the time frame K is changed and the gain encoding mode is different from that of the previous time frame, the gain encoding mode header HD12 is added to an output code string.
In this example, the gain sequence of 11.1 ch is the master gain sequence, and the gain sequence of 7.1 ch is the slave gain sequence for the gain sequence of 11.1 ch. Further, the gain sequence of 5.1 ch is the second master gain sequence, and the gain sequence of 2 ch is the slave gain sequence for the gain sequence of 5.1 ch.
Next, an example of the bitstreams output from the encoding device 51 if the gain encoding modes are changed depending on the time frames as shown in
For example, as shown in
For example, in the time frame J, the gain encoding mode header corresponding to the gain encoding mode header HD11 of
Here, in the example of
Further, the output code string of the time frame J contains the signal code string as the primary information.
In the time frame J+1 next to the time frame J, because the gain encoding mode is not changed, the auxiliary information contains no gain encoding mode header, and the output code string contains the gain code string and the downmix information as the auxiliary information and the signal code string as the primary information.
In the time frame K, because the gain encoding mode is changed again, the output code string contains the gain encoding mode header, the gain code string, and the downmix information as the auxiliary information, and the signal code string as the primary information.
Further, hereinafter, the gain encoding mode header and the gain code string of
The gain encoding mode header contained in the output code string has the configuration of
The gain encoding mode header of
GAIN_SEQ_NUM shows the number of the encoded gain sequences, and in the example of
The data of each gain sequence mode of each of GAIN_SEQ0 to GAIN_SEQ3 has the configuration of
The data of the gain sequence mode contains MASTER_FLAG, DIFF_SEQ_ID, DMIX_CH_CFG_ID, and DRC_MODE_ID, and each of these four elements is encoded in 4 bits.
MASTER_FLAG is an identifier that shows if the gain sequence described in the data of the gain sequence mode is the master gain sequence or not.
For example, if the MASTER_FLAG value is “1”, then it means that the gain sequence is the master gain sequence, and if the MASTER_FLAG value is “0”, then it means that the gain sequence is the slave gain sequence.
DIFF_SEQ_ID is an identifier showing the master gain sequence against which the differential with the gain sequence described in the data of the gain sequence mode is to be calculated, and is read out if the MASTER_FLAG value is “0”.
DMIX_CH_CFG_ID is configuration information of the channel corresponding to the gain sequence, i.e., information showing the number of channels of multi-channel sound signals of 11.1 ch, 7.1 ch, or the like, for example.
DRC_MODE_ID is an identifier showing the property of the DRC, which is used to calculate a gain by the first gain calculation circuit 62 or the second gain calculation circuit 65, and, in the example of
Note that, DRC_MODE_ID of the master gain sequence is sometimes different from DRC_MODE_ID of the slave gain sequence. In other words, a differential between gain sequences, the gains of which are obtained based on different DRC properties, is sometimes obtained.
Here, for example, in the time frame J of
Further, in this gain sequence mode, MASTER_FLAG is 1, DIFF_SEQ_ID is 0, DMIX_CH_CFG_ID is an identifier showing 11.1 ch, DRC_MODE_ID is an identifier showing DRC_MODE1, for example, and the gain sequence mode is encoded.
Similarly, in GAIN_SEQ1 that stores information of the gain sequence of 7.1 ch, MASTER_FLAG is 0, DIFF_SEQ_ID is 0, DMIX_CH_CFG_ID is an identifier showing 7.1 ch, DRC_MODE_ID is an identifier showing DRC_MODE1, for example, and the gain sequence mode is encoded.
Further, in GAIN_SEQ2, MASTER_FLAG is 0, DIFF_SEQ_ID is 0, DMIX_CH_CFG_ID is an identifier showing 5.1 ch, DRC_MODE_ID is an identifier showing DRC_MODE1, for example, and the gain sequence mode is encoded.
Further, in GAIN_SEQ3, MASTER_FLAG is 0, DIFF_SEQ_ID is 0, DMIX_CH_CFG_ID is an identifier showing 2 ch, DRC_MODE_ID is an identifier showing DRC_MODE1, for example, and the gain sequence mode is encoded.
Further, as described above, on and after the time frame J+1, if the correspondence relation of the master gain sequence and the slave gain sequence is not changed, no gain encoding mode header is inserted in the bit stream.
Meanwhile, if the correspondence relation of the master gain sequence and the slave gain sequence is changed, the gain encoding mode header is encoded.
For example, in the time frame K of
So, although the GAIN_SEQ0 and the GAIN_SEQ1 of the gain encoding mode header of the time frame K are the same as those of the time frame J, the GAIN_SEQ2 and the GAIN_SEQ3 are changed.
In other words, in GAIN_SEQ2, MASTER_FLAG is 1, DIFF_SEQ_ID is 0, DMIX_CH_CFG_ID is an identifier showing 5.1 ch, and DRC_MODE_ID is an identifier showing DRC_MODE1, for example. Further, in GAIN_SEQ3, MASTER_FLAG is 0, DIFF_SEQ_ID is 2, DMIX_CH_CFG_ID is an identifier showing 2 ch, and DRC_MODE_ID is an identifier showing DRC_MODE1, for example. Here, with regard to the gain sequence of 5.1 ch as the master gain sequence, it is not necessary to read DIFF_SEQ_ID, and therefore DIFF_SEQ_ID may be an arbitrary value.
Further, the gain code string contained in the auxiliary information of the output code string of
In the gain code string of
hld_mode, arranged next to GAIN_SEQ_NUM, is a flag showing whether the gain of the temporally previous time frame is to be held or not, and is encoded in 1 bit. Note that, in
For example, if the hld_mode value is 1, the gain of the previous time frame, i.e., for example, the first gain or the second gain obtained by decoding, is used as the gain of the current time frame as it is. So, in this case, it means that the differential between the first gains or the second gains of different time frames is obtained, and they are thus encoded.
Meanwhile, if the hld_mode value is 0, the gain, which is obtained based on the information described on and after hld_mode, is used as the gain of the current time frame.
If the hld_mode value is 0, next to hld_mode, cmode is described in 2 bits, and gpnum is described in 6 bits.
cmode shows the encoding method that is used to encode the gain waveform from the gain change points described after it.
The lower 1 bit of cmode shows the differential encoding mode at the gain change points. Specifically, if the value of the lower 1 bit of cmode is 0, then it means that the gain encoding method is the 0-order prediction differential mode (hereinafter sometimes referred to as DIFF1 mode), and if the value of the lower 1 bit of cmode is 1, then it means that the gain encoding method is the first-order prediction differential mode (hereinafter sometimes referred to as DIFF2 mode).
Here, the gain change point means the time at which, in a gain waveform containing gains at times (samples) in a time frame, the inclination of the gain after the time is changed from the inclination of the gain before the time. Note that, hereinafter, description will be made on the assumption that times (samples) are predetermined as candidate points for a gain change point, and the candidate point at which the inclination of the gain after the candidate point is changed from the inclination of the gain before the candidate point, out of the candidate points, is determined as the gain change point. Further, if the processed gain sequence is a slave gain sequence, the gain change point is the time at which, in a gain differential waveform with respect to a master gain sequence, the inclination of the gain (differential) after the time is changed from the inclination of the gain (differential) before the time.
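A minimal sketch of extracting gain change points from predetermined candidate points might look as follows. The concrete candidate offsets and the one-sample slope comparison are assumptions made here for illustration.

```python
# Unequally spaced candidate sample offsets within a time frame
# (assumed here for illustration).
CANDIDATES = [0, 16, 32, 64, 128, 256, 512, 1024]

def find_gain_change_points(gain, eps=1e-6):
    """Return the candidate offsets at which the inclination of the gain
    waveform after the point differs from the inclination before it."""
    points = []
    for c in CANDIDATES:
        if 0 < c < len(gain) - 1:
            slope_before = gain[c] - gain[c - 1]
            slope_after = gain[c + 1] - gain[c]
            if abs(slope_after - slope_before) > eps:
                points.append(c)
    return points
```

For a slave gain sequence, the same extraction would be applied to the differential waveform with respect to the master gain sequence instead of the gain waveform itself.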
The 0-order prediction differential mode means a mode of, in order to encode a gain waveform containing gains at times, i.e., at samples, obtaining a differential between the gain at each gain change point and the gain at the previous gain change point, and thereby encoding the gain waveform. In other words, the 0-order prediction differential mode means a mode of, in order to decode a gain waveform, decoding the gain waveform by using a differential between the gain at each time and the gain of another time.
To the contrary, the first-order prediction differential mode means a mode of, in order to encode a gain waveform, predicting the gain of each gain change point based on a linear function through the previous gain change point, i.e., the first-order prediction, obtaining the differential between the predicted value (first-order predicted value) and the real gain, and thereby encoding the gain waveform.
Meanwhile, the upper 1 bit of cmode shows if the gain at the beginning of a time frame is to be encoded or not. Specifically, if the upper 1 bit of cmode is 0, the gain at the beginning of a time frame is encoded to have the fixed length of 12 bits, and it is described as gval_abs_id0 of
The MSB (1 bit) of gval_abs_id0 is a sign bit, and the remaining 11 bits show the value (gain) of gval_abs_id0 determined based on the following mathematical formula (5) in 0.25 dB steps.
[Math 5]
gain_abs_linear=2^((0x7FF&gval_abs_id0)/24) (5)
Note that, in the mathematical formula (5), gain_abs_linear shows a gain of a linear value, i.e., a first gain or a second gain as a gain of a master gain sequence, or the differential between the gain of a master gain sequence and the gain of a slave gain sequence. Here, gain_abs_linear is a gain at the sample location at the beginning of the time frame. Further, in the mathematical formula (5), “^” means power.
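Formula (5) can be checked numerically: one step of the 11-bit value corresponds to 20·log10(2)/24 ≈ 0.25 dB, which is where the 0.25 dB step comes from. The helper below evaluates only the magnitude; how the sign bit in the MSB is applied to the decoded value is left out here.

```python
import math

def gain_abs_linear(gval_abs_id0: int) -> float:
    """Evaluate formula (5): the low 11 bits give the gain magnitude."""
    return 2.0 ** ((0x7FF & gval_abs_id0) / 24.0)

# One step of the 11-bit value is about 0.25 dB: 20 * log10(2) / 24.
step_db = 20.0 * math.log10(2.0) / 24.0
```

Note that the mask 0x7FF discards the MSB sign bit, so only the 11 magnitude bits enter the exponent.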
Further, if the upper 1 bit of cmode is 1, then it means that the gain value at the end of the previous time frame when decoding is treated as the gain value at the beginning of the current time frame.
Further, in
Further, in the gain code string, gloc_id[k] and gval_diff_id[k] are described next to gpnum or gval_abs_id0, the number of pairs of gloc_id[k] and gval_diff_id[k] being the same as the number of gain change points shown by gpnum.
Here, gloc_id[k] and gval_diff_id[k] show a gain change point and an encoded gain at the gain change point. Note that k of gloc_id[k] and gval_diff_id[k] is an index identifying a gain change point, and shows the order at the gain change point.
In this example, gloc_id[k] is described in 3 bits, and gval_diff_id[k] is described in any one of 1 bit to 11 bits. Note that, in
Here, the 0-order prediction differential mode (DIFF1 mode) and the first-order prediction differential mode (DIFF2 mode) will be described more specifically.
First, with reference to
In
Further, in this example, the two gain change points G11 and G12 are detected in the processed time frame J, and PREV11 shows the beginning location of the time frame J, i.e., the end location of the time frame J−1.
First, the location gloc[0] of the gain change point G11 is encoded in 3 bits as location information showing the time sample value from the beginning of the time frame J.
Specifically, the gain change point is encoded based on the table of
In
In this example, 0, 16, 32, 64, 128, 256, 512, and 1024th samples from the beginning of the time frame, the samples being unequally-spaced in the time frame, are candidate points for the gain change point.
So, for example, if the gain change point G11 is the 512th sample from the beginning of the time frame J, the gloc_id value “6” corresponding to gloc[gloc_id]=512 is described in the gain code string as gloc_id[0], which shows the location of the k=0th gain change point.
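The 3-bit location encoding just described can be sketched directly from the candidate-point table; the table values follow the example above.

```python
# The gloc table: unequally spaced candidate sample offsets from the
# beginning of the time frame, indexed by the 3-bit gloc_id.
GLOC_TABLE = [0, 16, 32, 64, 128, 256, 512, 1024]

def encode_gloc(sample_offset: int) -> int:
    """Return the 3-bit gloc_id for a candidate sample offset."""
    return GLOC_TABLE.index(sample_offset)
```

Because the table has eight entries, gloc_id always fits in the 3 bits described for the gain code string.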
With reference to
For example, the differential between the gain value gval[0] at the gain change point G11 and the gain value of the beginning location PREV11 is encoded based on the encoding table (code book) of
In this example, “1” is described as gval_diff_id[k] if the differential between the gain values is 0, “01” is described as gval_diff_id[k] if the differential between the gain values is +0.1, and “001” is described as gval_diff_id[k] if the differential between the gain values is +0.2.
Further, if the differential between the gain values is +0.3 or more or 0 or less, as gval_diff_id[k], a code “000” is described, and a fixed length code of 8 bits showing the differential between the gain values is described next to the code.
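The variable length code described above (1 bit to 11 bits in total) can be sketched as follows. The payload packed after the escape prefix “000” is an assumption made here for illustration (a signed 8-bit value in 0.1 steps); the description only says that a fixed-length 8-bit code follows.

```python
def encode_gain_diff(diff: float) -> str:
    """Variable-length code for a gain differential, per the table above.

    Returns a bit string: "1", "01", "001", or the escape "000" followed
    by an 8-bit fixed-length code (escape payload format assumed here).
    """
    table = {0.0: "1", 0.1: "01", 0.2: "001"}
    code = table.get(round(diff, 1))
    if code is not None:
        return code
    # Escape: "000" followed by a fixed-length 8-bit code; packing the
    # differential as a signed 8-bit value in 0.1 steps is an assumption.
    q = int(round(diff * 10)) & 0xFF
    return "000" + format(q, "08b")
```

The frequent case (a differential of 0) thus costs only 1 bit, while rarer differentials cost up to 11 bits, matching the 1-bit to 11-bit range of gval_diff_id[k].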
As described above, the location and the gain value at the first gain change point G11 are encoded, and subsequently, the differential between the location of the next gain change point G12 and that of the previous gain change point G11 and the differential between the gain value of the next gain change point G12 and that of the previous gain change point G11 are encoded.
In other words, location gloc[1] at the gain change point G12 is encoded to have 3 bits based on the table of
Further, the differential between the gain value gval[1] at the gain change point G12 and the gain value gval[0] at the gain change point G11 is encoded to have a variable length code of 1 bit to 11 bits based on the encoding table of
Note that the gloc table may not be limited to the table of
Next, with reference to
In
Further, in this example, the two gain change points G21 and G22 are detected in the processed time frame J, and PREV21 shows the beginning location of the time frame J.
First, the location gloc[0] of the gain change point G21 is encoded in 3 bits as location information showing the time sample value from the beginning of the time frame J. This encoding is similar to the process at the gain change point G11 described with reference to
Next, the differential between the gain value gval[0] at the gain change point G21 and the first-order predicted value of the gain value gval[0] is encoded.
Specifically, the gain waveform of the time frame J−1 is extended from the beginning location PREV21 of the time frame J, and the point P11 at the location gloc[0] on the extended line is obtained. Further, the gain value at the point P11 is treated as the first-order predicted value of the gain value gval[0].
In other words, the straight line through the beginning location PREV21, the inclination thereof being the same as that of the end portion of the gain waveform in the time frame J−1, is treated as the straight line obtained by extending the gain waveform of the time frame J−1, and the first-order predicted value of the gain value gval[0] is calculated by using the linear function showing the straight line.
Further, the differential between the thus obtained first-order predicted value and the real gain value gval[0] is obtained, and the differential is encoded to have a variable length code from 1 bit to 11 bits based on the encoding table of
Subsequently, the differential between the location of the next gain change point G22 and that of the previous gain change point G21 and the differential between the gain value of the next gain change point G22 and that of the previous gain change point G21 are encoded.
In other words, location gloc[1] at the gain change point G22 is encoded to have 3 bits based on the table of
Further, the differential between the gain value gval[1] at the gain change point G22 and the first-order predicted value of the gain value gval[1] is encoded.
Specifically, the inclination used to obtain the first-order predicted value is updated with the inclination of the straight line through the beginning location PREV21 and the previous gain change point G21, and the point P12 at the location gloc[1] on that straight line is obtained. Further, the gain value at the point P12 is treated as the first-order predicted value of the gain value gval[1].
In other words, the first-order predicted value of the gain value gval[1] is calculated by using the linear function showing the straight line through the previous gain change point G21 having the updated inclination. Further, the differential between the thus obtained first-order predicted value and the real gain value gval[1] is obtained, and the differential is encoded to have a variable length code from 1 bit to 11 bits based on the encoding table of
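The first-order prediction just walked through can be sketched as follows. The generalization of the slope update to a third and later gain change point, and all concrete values, are assumptions for illustration.

```python
def diff2_residuals(points, start_gain, start_slope):
    """Sketch of the first-order prediction differential (DIFF2) mode.

    points:      (sample_offset, gain) gain change points in one frame
    start_gain:  gain at the beginning of the frame (PREV21)
    start_slope: inclination at the end of the previous frame
    """
    residuals = []
    prev_loc, prev_gain, slope = 0, start_gain, start_slope
    for loc, gain in points:
        # First-order predicted value: extend the line through the
        # previous point with the current inclination.
        predicted = prev_gain + slope * (loc - prev_loc)
        residuals.append(gain - predicted)
        # Update the inclination with the line through the previous
        # point and this gain change point.
        slope = (gain - prev_gain) / (loc - prev_loc)
        prev_loc, prev_gain = loc, gain
    return residuals
```

When the gain waveform is close to piecewise linear, the residuals are near 0 and therefore cheap to encode with the variable length code.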
As described above, the gain of each gain sequence is encoded for each time frame. However, the encoding table, which is used to variable-length-encode the gain value at each gain change point, is not limited to the encoding table of
Specifically, as an encoding table for variable-length-encoding, different encoding tables may be used depending on the number of downmix channels, the difference of the above-mentioned DRC properties of
Here, for example, a method of configuring an encoding table utilizing the DRC and the general human auditory property will be described. It is necessary to reduce the gain to obtain the desired DRC property if a loud sound is input, and to return the gain if no loud sound is input after that.
In general, the former is called an attack, and the latter is called a release. According to the human auditory property, unless the speed of the attack is made high and the speed of the release is made considerably lower than the speed of the attack, the sound becomes unstable and a person may hear an unpleasant wobble in the sound.
In view of such a property, the differential between DRC gains of time frames corresponding to the above-mentioned 0-order prediction differential mode is obtained by using the generally-used attack/release DRC property, and the waveform of
Note that, in
In general, the probability density distribution of such time frame differentials is as shown in the distribution of
According to the probability density distribution of
In this example, the property between time frames has been described. However, the property between samples (times) in a time frame is similar to the property between time frames.
Such a probability density distribution changes depending on whether encoding is performed in the 0-order prediction differential mode or the first-order prediction differential mode, and on the content of the gain encoding mode header. So, by configuring the variable length code table accordingly, it is possible to encode gain information efficiently.
In the above, an example of a method of extracting gain change points from a gain waveform of a master gain sequence and a slave gain sequence, obtaining the differential, encoding the differential by using a variable length code, and thereby compressing a gain efficiently has been described. In an application example in which a relatively high bit rate is allowed and high accuracy of a gain waveform is required instead thereof, as a matter of course, it is also possible to obtain a differential between a master gain sequence and a slave gain sequence and to directly encode gain waveforms thereof. At this time, because a gain waveform shows time-series discrete signals, it is possible to encode the gain waveform by using a generally-known lossless compression method for time-series signals.
<Description of Encoding Process>
Next, behaviors of the encoding device 51 will be described.
When an input time-series signal of 1 time frame is supplied to the encoding device 51, the encoding device 51 encodes the input time-series signal and outputs an output code string, i.e., performs the encoding process. Hereinafter, with reference to the flowchart of
In Step S11, the first sound pressure level calculation circuit 61 calculates the first sound pressure level of the input time-series signal based on the supplied input time-series signal, and supplies the first sound pressure level to the first gain calculation circuit 62.
In Step S12, the first gain calculation circuit 62 calculates the first gain based on the first sound pressure level supplied from the first sound pressure level calculation circuit 61, and supplies the first gain to the gain encoding circuit 66. For example, the first gain calculation circuit 62 calculates the first gain based on the DRC property of the mode specified by an upper control apparatus such as DRC_MODE1 and DRC_MODE2.
In Step S13, the downmixing circuit 63 downmixes the supplied input time-series signal by using downmix information supplied from an upper control apparatus, and supplies the downmix signal obtained as the result thereof to the second sound pressure level calculation circuit 64.
In Step S14, the second sound pressure level calculation circuit 64 calculates a second sound pressure level based on a downmix signal supplied from the downmixing circuit 63, and supplies the second sound pressure level to the second gain calculation circuit 65.
In Step S15, the second gain calculation circuit 65 calculates a second gain of the second sound pressure level supplied from the second sound pressure level calculation circuit 64 for each downmix signal, and supplies the second gain to the gain encoding circuit 66.
In Step S16, the gain encoding circuit 66 performs the gain encoding process to thereby encode the first gain supplied from the first gain calculation circuit 62 and the second gain supplied from the second gain calculation circuit 65. Further, the gain encoding circuit 66 supplies the gain encoding mode header and the gain code string obtained as the result of the gain encoding process to the multiplexing circuit 68.
Note that the gain encoding process will be described later in detail. In the gain encoding process, with respect to gain sequences such as the first gain and the second gain, the differential between gain sequences, the differential between time frames, or the differential in a time frame is obtained and encoded. Further, a gain encoding mode header is generated only when necessary.
In Step S17, the signal encoding circuit 67 encodes the supplied input time-series signal based on a predetermined encoding method, and supplies a signal code string obtained as the result thereof to the multiplexing circuit 68.
In Step S18, the multiplexing circuit 68 multiplexes the gain encoding mode header and the gain code string supplied from the gain encoding circuit 66, downmix information supplied from an upper control apparatus, and the signal code string supplied from the signal encoding circuit 67, and outputs an output code string obtained as the result thereof. In this manner, the output code string of 1 time frame is output as a bitstream, and then the encoding process is finished. Then the encoding process of the next time frame is performed.
As described above, the encoding device 51 calculates the first gain of the yet-to-be-downmixed original input time-series signal and the second gain of the downmix signal obtained by downmixing, and obtains and encodes the differential between those gains as appropriate. As a result, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
In other words, because the encoding device 51 side can set the DRC property freely, the decoder side can obtain a sound having a more appropriate volume level. Further, by obtaining and efficiently encoding the differential between gains, it is possible to transmit more information with a smaller quantity of codes, and to reduce the calculation load of the decoding device side.
<Description of Gain Encoding Process>
Next, with reference to the flowchart of
In Step S41, the gain encoding circuit 66 determines the gain encoding mode based on an instruction from an upper control apparatus. In other words, for each gain sequence, it is determined whether the gain sequence is a master gain sequence or a slave gain sequence, and, for a slave gain sequence, which master gain sequence its differential is to be calculated against, and the like.
Specifically, the gain encoding circuit 66 actually calculates the differential between gains (first gains or second gains) of each gain sequence, and obtains a correlation of the gains. Further, the gain encoding circuit 66 treats, as a master gain sequence, a gain sequence whose gain correlations with the other gain sequences are high (differentials between gains are small) based on the differentials between the gains, for example, and treats the other gain sequences as slave gain sequences.
Note that all the gain sequences may be treated as master gain sequences.
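One way to realize this master selection can be sketched as follows. Treating as master the gain sequence whose summed absolute differential to all the other gain sequences is smallest (i.e., whose correlation with the others is highest) is a plausible criterion assumed here for illustration.

```python
def choose_master(sequences):
    """Choose the master gain sequence index as in Step S41.

    The selection criterion (smallest summed absolute differential to
    all other gain sequences) is an illustrative assumption.
    """
    def total_diff(i):
        return sum(
            sum(abs(a - b) for a, b in zip(sequences[i], sequences[j]))
            for j in range(len(sequences))
            if j != i
        )
    return min(range(len(sequences)), key=total_diff)
```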
In Step S42, the gain encoding circuit 66 determines if the gain encoding mode of the processed current time frame is the same as the gain encoding mode of the previous time frame or not.
If it is determined that they are not the same in Step S42, in Step S43, the gain encoding circuit 66 generates a gain encoding mode header, and adds the gain encoding mode header to auxiliary information. For example, the gain encoding circuit 66 generates the gain encoding mode header of
After the gain encoding mode header is generated in Step S43, then the process proceeds to Step S44.
Further, if it is determined that the gain encoding mode is the same in Step S42, no gain encoding mode header is added to the output code string, therefore the process of Step S43 is not performed, and the process proceeds to Step S44.
If a gain encoding mode header is generated in Step S43, or if it is determined that the gain encoding mode is the same in Step S42, the gain encoding circuit 66 obtains the differential between the gain sequences depending on the gain encoding mode in Step S44.
For example, let's say that a 7.1 ch gain sequence as a second gain is a slave gain sequence, and a master gain sequence corresponding to the slave gain sequence is an 11.1 ch gain sequence as a first gain.
In this case, the gain encoding circuit 66 obtains the differential between the 7.1 ch gain sequence and the 11.1 ch gain sequence. Note that, at this time, no differential is calculated for the 11.1 ch gain sequence as the master gain sequence, and the 11.1 ch gain sequence is encoded as it is in the later process.
As described above, the differential between gain sequences is obtained, and the slave gain sequence is encoded by using that differential.
In Step S45, the gain encoding circuit 66 selects one gain sequence as a processed gain sequence, and determines if the gains are constant in the gain sequence or not, and if the gains are the same as the gains of the previous time frame or not.
For example, let's say that, in the time frame J, the 11.1 ch gain sequence as a master gain sequence is selected as a processed gain sequence. In this case, if the gains (first gains or second gains) of the samples of the 11.1 ch gain sequence in the time frame J are approximately constant values, the gain encoding circuit 66 determines that the gains are constant in the gain sequence.
Further, if the differentials between the gains at the respective samples of the 11.1 ch gain sequence in the time frame J and the gains at the respective samples of the 11.1 ch gain sequence in the time frame J−1, i.e., the previous time frame, are approximately 0, the gain encoding circuit 66 determines that the gains are the same as those in the previous time frame.
Note that, if the processed gain is the slave gain sequence, it is determined if the differentials between the gains obtained in Step S44 are constant in a time frame or not, and if the differentials are the same as the differentials between the gains in the previous time frame or not.
If it is determined that the gains are constant in a gain sequence and that the gains are the same as the gains in the previous time frame in Step S45, the gain encoding circuit 66 sets the value 1 as hld_mode in Step S46, and the process proceeds to Step S51. In other words, 1 is described as hld_mode in the gain code string.
If it is determined that the gains are constant in a gain sequence and that the gains are the same as the gains in the previous time frame, the gains are not changed in the previous time frame and in the current time frame, and therefore the decoder side uses the gain in the previous time frame as it is and decodes the gain. So, in this case, it is understood that the differential between the time frames is obtained and the gain is encoded.
To the contrary, if it is determined in Step S45 that the gains are not constant in the gain sequence or that the gains are not the same as the gains in the previous time frame, the gain encoding circuit 66 sets the value 0 as hld_mode in Step S47. In other words, 0 is described as hld_mode in the gain code string.
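The hld_mode decision of Steps S45 to S47 can be sketched as follows. The tolerance eps is an assumption standing in for the "approximately constant" and "approximately 0" judgments in the description.

```python
def choose_hld_mode(gains, prev_gains, eps=1e-6):
    """Decide hld_mode as in Steps S45 to S47.

    Returns 1 if the gains are constant within the time frame and the
    same as those of the previous time frame, otherwise 0.
    """
    constant = all(abs(g - gains[0]) < eps for g in gains)
    unchanged = all(abs(g - p) < eps for g, p in zip(gains, prev_gains))
    return 1 if constant and unchanged else 0
```

For a slave gain sequence, the same test would be applied to the differentials with the master gain sequence rather than to the gains themselves.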
In Step S48, the gain encoding circuit 66 extracts gain change points of the processed gain sequence.
For example, as described above with reference to
Note that, more specifically, if the processed gain sequence is a slave gain sequence, a gain change point is extracted from the time waveform, which shows the gain differential between the processed gain sequence and the master gain sequence obtained for the gain sequence.
After the gain encoding circuit 66 extracts gain change points, the gain encoding circuit 66 describes the number of the extracted gain change points as gpnum in the gain code string of
In Step S49, the gain encoding circuit 66 determines cmode.
For example, the gain encoding circuit 66 actually encodes the processed gain sequence by using the 0-order prediction differential mode and by using the first-order prediction differential mode, and selects one differential encoding mode, with which the quantity of codes obtained as the result of encoding is smaller. Further, the gain encoding circuit 66 determines if the gain at the beginning of the time frame is to be encoded or not based on an instruction from an upper control apparatus, for example. As a result, cmode is determined.
After cmode is determined, the gain encoding circuit 66 describes a value showing the determined cmode in the gain code string of
To the contrary, if the upper 1 bit of cmode is 1, decoding is performed where the gain value at the end of the previous time frame is used as the gain value at the beginning of the current time frame, and therefore it means that the differential between the time frames is obtained and encoded.
In Step S50, the gain encoding circuit 66 encodes the gains at the gain change points extracted in Step S48 by using the differential encoding mode selected in the process of Step S49. Further, the gain encoding circuit 66 describes the results of encoding the gains at the gain change points in gloc_id[k] and gval_diff_id[k] of the gain code string of
When encoding the gains at the gain change points, an entropy encoding circuit of the gain encoding circuit 66 encodes the gain values while switching the entropy code book table such as the encoding table of
As described above, encoding is performed based on the 0-order prediction differential mode or the first-order prediction differential mode, and therefore the differential within a time frame of a gain sequence is obtained and the gains are encoded.
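The gain-change-point extraction and 0-order prediction differential encoding described above can be sketched in Python as follows. This is an illustrative sketch only: the function name, the slope-change test, and the raw `(gloc, gval_diff)` pairs are assumptions, and the actual encoder further maps these values to `gloc_id[k]`/`gval_diff_id[k]` through entropy code book tables.

```python
def encode_gain_frame(gain_waveform, prev):
    """Hypothetical sketch of one-frame 0-order prediction differential
    encoding: find points where the piecewise-linear gain waveform
    changes slope, then encode each point's gain as a differential from
    the previously encoded gain value (prev at the frame start)."""
    # Extract gain change points: samples where the slope changes.
    change_points = []
    for n in range(1, len(gain_waveform) - 1):
        left = gain_waveform[n] - gain_waveform[n - 1]
        right = gain_waveform[n + 1] - gain_waveform[n]
        if abs(right - left) > 1e-9:
            change_points.append(n)
    change_points.append(len(gain_waveform) - 1)  # end of frame

    # 0-order prediction: differential from the last encoded gain.
    codes = []
    last = prev
    for gloc in change_points:
        codes.append((gloc, gain_waveform[gloc] - last))
        last = gain_waveform[gloc]
    return codes
```

For a waveform that is flat and then ramps up, only the corner and the frame end are emitted, which is why this representation is compact for typical DRC gain curves.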
If 1 is set as hld_mode in Step S46 or if encoding is performed in Step S50, in Step S51, the gain encoding circuit 66 determines if all the gain sequences are encoded or not. For example, if all the gain sequences-to-be-processed are processed, it is determined that all the gain sequences are encoded.
If it is determined that not all the gain sequences are encoded in Step S51, the process returns to Step S45, and the above-mentioned process is repeated. In other words, an unprocessed gain sequence is to be encoded as the gain sequence to be processed next.
To the contrary, if it is determined that all the gain sequences are encoded in Step S51, it means that a gain code string is obtained. So the gain encoding circuit 66 supplies the generated gain encoding mode header and gain code string to the multiplexing circuit 68. Note that if a gain encoding mode header is not generated, only a gain code string is output.
After the gain encoding mode header and the gain code string are output as described above, the gain encoding process is finished, and after that, the process proceeds to Step S17 of
As described above, the encoding device 51 obtains the differential between gain sequences, the differential between time frames of a gain sequence, or the differential within a time frame of a gain sequence, encodes the gains, and generates a gain code string. By obtaining these differentials and encoding the gains, it is possible to encode the first gain and the second gain more efficiently. In other words, it is possible to further reduce the quantity of codes obtained as the result of encoding.
<Example of Configuration of Decoding Device>
Next, a decoding device that receives the output code string output from the encoding device 51 as an input code string and decodes that input code string will be described.
The decoding device 91 of
The demultiplexing circuit 101 demultiplexes a supplied input code string, i.e., an output code string received from the encoding device 51. The demultiplexing circuit 101 supplies the gain encoding mode header and the gain code string, which are obtained by demultiplexing the input code string, to the gain decoding circuit 103, and in addition, supplies the signal code string and the downmix information to the signal decoding circuit 102. Note that, if the input code string contains no gain encoding mode header, no gain encoding mode header is supplied to the gain decoding circuit 103.
The signal decoding circuit 102 decodes and downmixes the signal code string supplied from the demultiplexing circuit 101 based on the downmix information supplied from the demultiplexing circuit 101 and based on downmix control information supplied from an upper control apparatus, and supplies the obtained time-series signal to the gain application circuit 104. Here, the time-series signal is, for example, a sound signal of 11.1 ch or 7.1 ch, and a sound signal of each channel of the time-series signal is a PCM signal.
The gain decoding circuit 103 decodes the gain encoding mode header and the gain code string supplied from the demultiplexing circuit 101, and supplies, to the gain application circuit 104, the gain information that is selected, out of the gain information obtained as the result thereof, based on the downmix control information and the DRC control information supplied from an upper control apparatus. Here, the gain information output from the gain decoding circuit 103 is information corresponding to the above-mentioned first gain or second gain.
The gain application circuit 104 adjusts the gains of the time-series signal supplied from the signal decoding circuit 102 based on the gain information supplied from the gain decoding circuit 103, and outputs the obtained output-time-series signal.
<Description of Decoding Process>
Next, behaviors of the decoding device 91 will be described.
When an input code string of 1 time frame is supplied to the decoding device 91, the decoding device 91 decodes the input code string and outputs an output-time-series signal, i.e., performs the decoding process. Hereinafter, with reference to the flowchart of
In Step S81, the demultiplexing circuit 101 demultiplexes an input code string, supplies the gain encoding mode header and the gain code string obtained as the result thereof to the gain decoding circuit 103, and in addition, supplies the signal code string and the downmix information to the signal decoding circuit 102.
In Step S82, the signal decoding circuit 102 decodes the signal code string supplied from the demultiplexing circuit 101.
For example, the signal decoding circuit 102 decodes and inverse quantizes the signal code string, and obtains MDCT coefficients of the channels. Further, based on downmix control information supplied from an upper control apparatus, the signal decoding circuit 102 multiplies MDCT coefficients of the channels by a gain factor obtained based on the downmix information supplied from the demultiplexing circuit 101, and the results are added, whereby a gain-applied MDCT coefficient of each downmixed channel is calculated.
Further, the signal decoding circuit 102 performs the inverse MDCT process on the gain-applied MDCT coefficient of each channel, performs windowing and overlap-adding processes on the obtained inverse MDCT signal, and thereby generates a time-series signal containing a signal of each downmixed channel. Note that the downmixing process may be performed in the MDCT domain or in the time domain.
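The windowing and overlap-adding step can be sketched as follows. This is a generic illustration: the sine window and 50% overlap are assumptions (the actual window depends on the window length selected by the encoder), and `frames` stands for the inverse-MDCT output blocks of one channel.

```python
import math

def overlap_add(frames, hop):
    """Hypothetical sketch of windowing and overlap-add after the
    inverse MDCT: each block of length 2*hop is multiplied by a sine
    window and summed with 50% overlap to rebuild the time signal."""
    n = 2 * hop
    window = [math.sin(math.pi * (k + 0.5) / n) for k in range(n)]
    out = [0.0] * (hop * (len(frames) + 1))
    for i, frame in enumerate(frames):
        for k in range(n):
            out[i * hop + k] += window[k] * frame[k]
    return out
```

In the overlapped region, each output sample is the sum of the tail of one windowed block and the head of the next, which cancels the time-domain aliasing introduced by the MDCT.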
The signal decoding circuit 102 supplies the thus obtained time-series signal to the gain application circuit 104.
In Step S83, the gain decoding circuit 103 performs the gain decoding process, i.e., decodes the gain encoding mode header and the gain code string supplied from the demultiplexing circuit 101, and supplies the gain information to the gain application circuit 104. Note that the gain decoding process will be described later in detail.
In Step S84, the gain application circuit 104 adjusts the gains of the time-series signal supplied from the signal decoding circuit 102 based on the gain information supplied from the gain decoding circuit 103, and outputs the obtained output-time-series signal.
When the output-time-series signal is output, the decoding process is finished.
As described above, the decoding device 91 decodes the gain encoding mode header and the gain code string, applies the obtained gain information to a time-series signal, and adjusts the gain in the time domain.
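The gain application itself is a per-sample multiplication of the decoded gain waveform into each channel, which might be sketched as below. The function name is hypothetical, and the gains are assumed to already be on a linear scale (gains expressed in decibels would first be converted).

```python
def apply_gain(time_series, gain_info):
    """Hypothetical sketch of the gain application circuit 104:
    multiply the decoded per-sample gain waveform (linear scale)
    into every channel of the current time frame."""
    return [[sample * gain for sample, gain in zip(channel, gain_info)]
            for channel in time_series]
```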
The gain code string is obtained by encoding gains by obtaining the differential between gain sequences, the differential between time frames of a gain sequence, or the differential in a time frame of a gain sequence. So the decoding device 91 can obtain more appropriate gain information by using a gain code string with a smaller quantity of codes. In other words, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
<Description of Gain Decoding Process>
Subsequently, with reference to the flowchart of FIG. 21, the gain decoding process corresponding to the process of Step S83 of
In Step S121, the gain decoding circuit 103 determines if the input code string contains a gain encoding mode header or not. For example, if a gain encoding mode header is supplied from the demultiplexing circuit 101, then it is determined that the gain encoding mode header is contained.
If it is determined that a gain encoding mode header is contained in Step S121, in Step S122, the gain decoding circuit 103 decodes the gain encoding mode header supplied from the demultiplexing circuit 101. As a result, information of each gain sequence such as a gain encoding mode is obtained.
After the gain encoding mode header is decoded, then the process proceeds to Step S123.
Meanwhile, if it is determined that a gain encoding mode header is not contained in Step S121, then the process proceeds to Step S123.
After the gain encoding mode header is decoded in Step S122 or if it is determined that a gain encoding mode header is not contained in Step S121, in Step S123, the gain decoding circuit 103 decodes all the gain sequences. In other words, the gain decoding circuit 103 decodes the gain code string of
In Step S124, the gain decoding circuit 103 determines one gain sequence to be processed, and determines if the hld_mode value of the one gain sequence is 0 or not.
If it is determined that the hld_mode value is not 0 but 1 in Step S124, then the process proceeds to Step S125.
In Step S125, the gain decoding circuit 103 uses the gain waveform of the previous time frame as it is as the gain waveform of the current time frame.
After the gain waveform of the current time frame is obtained, then the process proceeds to Step S129.
To the contrary, if it is determined that the hld_mode value is 0 in Step S124, in Step S126, the gain decoding circuit 103 determines if cmode is larger than 1 or not, i.e., if the upper 1 bit of cmode is 1 or not.
If it is determined that cmode is larger than 1, i.e., that the upper 1 bit of cmode is 1 in Step S126, the gain value at the end of the previous time frame is treated as the gain value at the beginning of the current time frame, and the process proceeds to Step S128.
Here, the gain decoding circuit 103 holds the gain value at the end of the time frame as prev. When decoding a gain, the prev value is used as appropriate as the gain value at the beginning of the current time frame, and the gain of the gain sequence is obtained.
To the contrary, if it is determined that cmode is equal to or smaller than 1, i.e., that the upper 1 bit of cmode is 0 in Step S126, the process of Step S127 is performed.
In other words, in Step S127, the gain decoding circuit 103 substitutes gval_abs_id0, which is obtained by decoding the gain code string, in the above-mentioned mathematical formula (5) to thereby calculate a gain value at the beginning of the current time frame, and updates the prev value. In other words, the gain value obtained by calculation of the mathematical formula (5) is treated as a new prev value. Note that, more specifically, if the processed gain sequence is a slave gain sequence, the prev value is the differential value between the processed gain sequence and the master gain sequence at the beginning of the current time frame.
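The cmode branch of Steps S126 and S127 can be sketched as follows. The function name is hypothetical, and `decode_abs` stands in for the above-mentioned mathematical formula (5), whose exact form is not reproduced here.

```python
def frame_start_gain(cmode, gval_abs_id0, prev, decode_abs):
    """Hypothetical sketch of Steps S126/S127: choose the gain value at
    the beginning of the current time frame. If the upper bit of the
    2-bit cmode is 1, the end-of-previous-frame gain (prev) is carried
    over; otherwise gval_abs_id0 is decoded via formula (5) and becomes
    the new prev value."""
    if cmode > 1:                        # upper 1 bit of cmode is 1
        return prev                      # reuse previous frame's end gain
    return decode_abs(gval_abs_id0)      # decode absolute start gain
```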
After the prev value is updated in Step S127 or if it is determined that cmode is larger than 1 in Step S126, in Step S128, the gain decoding circuit 103 generates the gain waveform of the processed gain sequence.
Specifically, the gain decoding circuit 103 determines, with reference to cmode obtained by decoding the gain code string, the 0-order prediction differential mode or the first-order prediction differential mode. Further, the gain decoding circuit 103 obtains a gain of each sample location in the current time frame depending on the determined differential encoding mode by using the prev value and by using gloc_id[k] and gval_diff_id[k] at each gain change point obtained by decoding the gain code string, and treats the result as a gain waveform.
For example, if it is determined that the 0-order prediction differential mode is employed, the gain decoding circuit 103 adds the gain value (differential value) shown by gval_diff_id[0] to the prev value, and treats the obtained value as the gain value at the sample location identified by gloc_id[0]. At this time, the gain value at each sample location from the beginning of the time frame to the sample location identified by gloc_id[0] is obtained by assuming that the gain changes linearly from the prev value to the gain value at the sample location identified by gloc_id[0].
After this, in a similar way, the gain value of each subsequent gain change point is obtained based on the gain value of the previous gain change point and based on gloc_id[k] and gval_diff_id[k] of the gain change point of interest, and a gain waveform containing the gain values of the sample locations in a time frame is obtained.
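The waveform reconstruction of Step S128 in the 0-order prediction differential mode can be sketched as below. This is an illustrative sketch: `change_points` stands for the already-decoded `(gloc, gval_diff)` pairs (i.e., after the entropy tables have been applied), and the function name is an assumption.

```python
def decode_gain_waveform(prev, change_points, frame_len):
    """Hypothetical sketch of Step S128, 0-order prediction mode:
    rebuild the per-sample gain waveform from the prev value and the
    decoded (gloc, gval_diff) pairs, interpolating linearly between
    successive gain change points and holding the last value to the
    end of the frame."""
    waveform = []
    last_loc, last_val = -1, prev
    for gloc, gval_diff in change_points:
        gval = last_val + gval_diff            # 0-order prediction
        for n in range(last_loc + 1, gloc + 1):
            t = (n - last_loc) / (gloc - last_loc)
            waveform.append(last_val + t * (gval - last_val))
        last_loc, last_val = gloc, gval
    waveform.extend(last_val for _ in range(last_loc + 1, frame_len))
    return waveform
```

The last sample of the returned waveform is what Step S129 then holds as the prev value for the next time frame.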
Here, if the processed gain sequence is a slave gain sequence, the gain values (gain waveform) obtained as the result of the above-mentioned process are the differential values between the gain waveform of the processed gain sequence and the gain waveform of the master gain sequence.
In view of this, with reference to MASTER_FLAG and DIFF_SEQ_ID of
Then, if the processed gain sequence is a master gain sequence, the gain decoding circuit 103 treats the gain waveform obtained as the result of the above-mentioned process as the final gain information of the processed gain sequence.
Meanwhile, if the processed gain sequence is a slave gain sequence, the gain decoding circuit 103 adds the gain information (gain waveform) on the master gain sequence corresponding to the processed gain sequence to the gain waveform obtained as the result of the above-mentioned process, and treats the result as the final gain information of the processed gain sequence.
After the gain waveform (gain information) of the processed gain sequence is obtained as described above, then the process proceeds to Step S129.
After the gain waveform is generated in Step S128 or Step S125, then the process of Step S129 is performed.
In Step S129, the gain decoding circuit 103 holds the gain value at the end of the current time frame of the gain waveform of the processed gain sequence as the prev value of the next time frame. Note that, if the processed gain sequence is a slave gain sequence, the value at the end of the time frame of the gain waveform obtained based on the 0-order prediction differential mode or the first-order prediction differential mode, i.e., the value at the end of the time frame of the time waveform of the differential between the gain waveform of the processed gain sequence and the gain waveform of the master gain sequence, is treated as the prev value.
In Step S130, the gain decoding circuit 103 determines if the gain waveforms of all the gain sequences are obtained or not. For example, if all the gain sequences shown by the gain encoding mode header are treated as the processed gain sequences and the gain waveforms (gain information) are obtained, it is determined that the gain waveforms of all the gain sequences are obtained.
If it is determined in Step S130 that the gain waveforms of all the gain sequences have not yet been obtained, the process returns to Step S124, and the above-mentioned process is repeated. In other words, the next gain sequence is processed, and its gain waveform (gain information) is obtained.
To the contrary, if it is determined that the gain waveforms of all the gain sequences are obtained in Step S130, the gain decoding process is finished, and thereafter the process proceeds to Step S84 of
Note that, in this case, the gain decoding circuit 103 supplies, to the gain application circuit 104, the gain information of the gain sequence, out of all the gain sequences, for which the number of downmixed channels is that shown by the downmix control information and whose gain is calculated based on the DRC property shown by the DRC control information. In other words, with reference to DMIX_CH_CFG_ID and DRC_MODE_ID of each gain sequence mode of
As described above, the decoding device 91 decodes the gain encoding mode header and the gain code string, and calculates the gain information of each gain sequence. In this way, by decoding the gain code string and obtaining the gain information, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
By the way, as shown in
It is easy to calculate and obtain such gain waveforms, and therefore the calculation load on the decoding device 91 side is not very large. However, if it is required to reduce the calculation load in mobile terminals and the like, for example, the reproducibility of the gain waveforms may be sacrificed to some extent to reduce the amount of calculation.
According to the DRC attack/release time constant property, in general, a gain is decreased sharply and is returned slowly. Because of this, from a viewpoint of the encoding efficiency, in many cases, the 0-order prediction differential mode is frequently used, the number gpnum of gain change points in a time frame is as small as two or less, and the differential value between gains at the gain change points, i.e., gval_diff_id[k], is small.
For example, in the example of
At this time, the decoding device 91 adds the gain value at the beginning location PREV11, i.e., the prev value, to the differential value gval_diff[0] in decibels, and further adds the differential value gval_diff[1] to the result of addition. As a result, the gain value gval[1] at the gain change point G12 is obtained. Hereinafter, the thus obtained result of adding the gain value at the beginning location PREV11, the differential value gval_diff[0], and the differential value gval_diff[1] will sometimes be referred to as a gain addition value.
In this case, the space between the location gloc[0] of the gain change point G11 and the location gloc[1] of the gain change point G12 is linearly interpolated, the straight line is extended to the location of the Nth sample in the time frame J, which is the beginning of the time frame J+1, and the gain value of the Nth sample is obtained as the prev value of the next time frame J+1. If the inclination of the straight line connecting the gain change point G11 and the gain change point G12 is small, the gain addition value, which is obtained by adding the differential values up to the differential value gval_diff[1] as described above, may be treated as the prev value of the time frame J+1 without causing any particular problem.
Note that, the inclination of the straight line connecting the gain change point G11 and the gain change point G12 can be obtained easily by using the fact that the location gloc[k] of each gain change point is a power of 2. In other words, in the example of
If the inclination is smaller than a certain threshold, the gain addition value is treated as the prev value of the next time frame J+1. If the inclination is equal to or larger than the threshold, by using the method described in the above-mentioned first embodiment, a gain waveform is obtained and the gain value at the end of the time frame may be treated as the prev value.
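The slope-threshold shortcut for the prev value can be sketched as below. This is an illustrative sketch: the function name and the threshold value are assumptions, and `codes` stands for the decoded `(gloc, gval_diff)` pairs of the current frame.

```python
def next_prev_value(prev, codes, frame_len, threshold):
    """Hypothetical sketch of the reduced-load prev computation: reuse
    the gain addition value (prev plus all decoded differentials) as
    the next frame's prev when the last segment's inclination is small;
    otherwise extend the straight line through the last two gain change
    points to the first sample of the next frame."""
    gain_add = prev + sum(diff for _, diff in codes)
    (loc0, _), (loc1, diff1) = codes[-2], codes[-1]
    # gloc values are powers of 2, so this division reduces to a bit
    # shift in a fixed-point implementation.
    slope = diff1 / (loc1 - loc0)
    if abs(slope) < threshold:
        return gain_add                    # small inclination: shortcut
    return gain_add + slope * (frame_len - loc1)   # extrapolate line
```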
Further, if the first-order prediction differential mode is used, a gain waveform is obtained directly by using the method described in the first embodiment, and the value at the end of the time frame may be treated as the prev value.
By employing such a method, it is possible to reduce the calculation load of the decoding device 91.
<Example of Configuration of Encoding Device>
Note that, in the above, the encoding device 51 actually performs downmixing, and calculates the sound pressure level of the obtained downmix signal as the second sound pressure level. Alternatively, without performing downmixing, a downmixed sound pressure level may be obtained directly based on the sound pressure level of each channel. In this case, the sound pressure level varies to some extent depending on the correlation between the channels of an input time-series signal, but the calculation amount can be reduced.
In this way, if a downmixed sound pressure level is obtained directly without performing downmixing, an encoding device is configured as shown in
The encoding device 131 of
The first sound pressure level calculation circuit 61 calculates, based on an input time-series signal, the sound pressure levels of the channels of the input time-series signal, supplies the sound pressure levels to the second sound pressure level estimating circuit 141, and supplies, to the first gain calculation circuit 62, the representative values of the sound pressure levels of the channels as first sound pressure levels.
Further, based on the sound pressure levels of the channels supplied from the first sound pressure level calculation circuit 61, the second sound pressure level estimating circuit 141 calculates an estimated second sound pressure level, and supplies the second sound pressure level to the second gain calculation circuit 65.
<Description of Encoding Process>
Subsequently, behaviors of the encoding device 131 will be described. Hereinafter, with reference to the flowchart of
Note that the processes of Step S161 and Step S162 are the same as the processes of Step S11 and Step S12 of
In Step S163, the second sound pressure level estimating circuit 141 calculates a second sound pressure level based on the sound pressure level of each channel supplied from the first sound pressure level calculation circuit 61, and supplies the second sound pressure level to the second gain calculation circuit 65. For example, the second sound pressure level estimating circuit 141 obtains a weighted sum (linear combination) of the sound pressure levels of the respective channels by using a prepared coefficient, whereby one second sound pressure level is calculated.
After the second sound pressure level is obtained, then, the processes of Step S164 to Step S167 are performed and the encoding process is finished. The processes are similar to the processes of Step S15 to Step S18 of
As described above, the encoding device 131 calculates the second sound pressure level based on the sound pressure levels of the channels of an input time-series signal, obtains the second gain based on the second sound pressure level, obtains the differential from the first gain as appropriate, and encodes the differential. As a result, sound of an appropriate volume level can be obtained with a smaller quantity of codes, and in addition, encoding can be performed with a smaller calculation amount.
<Example of Configuration of Encoding Device>
Further, in the above, an example in which the DRC process is performed in the time domain has been described. Alternatively, the DRC process may be performed in the MDCT domain. In this case, an encoding device is configured as shown in
The encoding device 171 of
The window length selecting/windowing circuit 181 selects a window length, performs a windowing process on the supplied input time-series signal by using the selected window length, and supplies the time frame signal obtained as the result thereof to the MDCT circuit 182.
The MDCT circuit 182 performs the MDCT process on the time frame signal supplied from the window length selecting/windowing circuit 181, and supplies the MDCT coefficient obtained as the result thereof to the first sound pressure level calculation circuit 183, the downmixing circuit 185, and the adaptation bit assigning circuit 190.
The first sound pressure level calculation circuit 183 calculates the first sound pressure level of the input time-series signal based on the MDCT coefficient supplied from the MDCT circuit 182, and supplies the first sound pressure level to the first gain calculation circuit 184. The first gain calculation circuit 184 calculates the first gain based on the first sound pressure level supplied from the first sound pressure level calculation circuit 183, and supplies the first gain to the gain encoding circuit 189.
The downmixing circuit 185 calculates the MDCT coefficient of each channel after downmixing based on downmix information supplied from an upper control apparatus and based on the MDCT coefficient of each channel of the input time-series signal supplied from the MDCT circuit 182, and supplies the MDCT coefficient to the second sound pressure level calculation circuit 186.
The second sound pressure level calculation circuit 186 calculates the second sound pressure level based on the MDCT coefficient supplied from the downmixing circuit 185, and supplies the second sound pressure level to the second gain calculation circuit 187. The second gain calculation circuit 187 calculates the second gain based on the second sound pressure level supplied from the second sound pressure level calculation circuit 186, and supplies the second gain to the gain encoding circuit 189.
The gain encoding circuit 189 encodes the first gain supplied from the first gain calculation circuit 184 and the second gain supplied from the second gain calculation circuit 187, and supplies the gain code string obtained as the result thereof to the multiplexing circuit 192.
The adaptation bit assigning circuit 190 generates bit assignment information showing the quantity of codes, which is the target when encoding the MDCT coefficient, based on the MDCT coefficient supplied from the MDCT circuit 182, and supplies the MDCT coefficient and the bit assignment information to the quantizing/encoding circuit 191.
The quantizing/encoding circuit 191 quantizes and encodes the MDCT coefficient from the adaptation bit assigning circuit 190 based on the bit assignment information supplied from the adaptation bit assigning circuit 190, and supplies the signal code string obtained as the result thereof to the multiplexing circuit 192. The multiplexing circuit 192 multiplexes the gain code string supplied from the gain encoding circuit 189, the downmix information supplied from the upper control apparatus, and the signal code string supplied from the quantizing/encoding circuit 191, and outputs the output code string obtained as the result thereof.
<Description of Encoding Process>
Next, behaviors of the encoding device 171 will be described. Hereinafter, with reference to the flowchart of
In Step S191, the window length selecting/windowing circuit 181 selects a window length, performs a windowing process on the supplied input time-series signal by using the selected window length, and supplies the time frame signal obtained as the result thereof to the MDCT circuit 182. As a result, the signal of each channel of the input time-series signal is divided into time frame signals, i.e., signals in time frame units.
In Step S192, the MDCT circuit 182 performs the MDCT process on the time frame signal supplied from the window length selecting/windowing circuit 181, and supplies the MDCT coefficient obtained as the result thereof to the first sound pressure level calculation circuit 183, the downmixing circuit 185, and the adaptation bit assigning circuit 190.
In Step S193, the first sound pressure level calculation circuit 183 calculates the first sound pressure level of the input time-series signal based on the MDCT coefficient supplied from the MDCT circuit 182, and supplies the first sound pressure level to the first gain calculation circuit 184. Here, the first sound pressure level calculated by the first sound pressure level calculation circuit 183 is the same as that calculated by the first sound pressure level calculation circuit 61 of
In Step S194, the first gain calculation circuit 184 calculates the first gain based on the first sound pressure level supplied from the first sound pressure level calculation circuit 183, and supplies the first gain to the gain encoding circuit 189. For example, the first gain is calculated based on the DRC properties of
In Step S195, the downmixing circuit 185 performs downmixing based on downmix information supplied from an upper control apparatus and based on the MDCT coefficient of each channel of the input time-series signal supplied from the MDCT circuit 182, calculates the MDCT coefficient of each channel after downmixing, and supplies the MDCT coefficient to the second sound pressure level calculation circuit 186.
For example, MDCT coefficients of the channels are multiplied by a gain factor obtained based on the downmix information, and the MDCT coefficients, which are multiplied by the gain factor, are added, whereby an MDCT coefficient of a downmixed channel is calculated.
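The multiply-and-add described above can be sketched as below. The function name is an assumption, and `gains` stands for the per-channel gain factors derived from the downmix information.

```python
def downmix_mdct(mdct_coefs, gains):
    """Hypothetical sketch of the MDCT-domain downmix: each input
    channel's MDCT coefficients are multiplied by that channel's gain
    factor and summed per coefficient bin, yielding the MDCT
    coefficients of one downmixed channel."""
    bins = len(mdct_coefs[0])
    return [sum(g * ch[b] for g, ch in zip(gains, mdct_coefs))
            for b in range(bins)]
```

Because the MDCT is linear, downmixing in the MDCT domain in this way is equivalent to downmixing the time-domain signals and then transforming.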
In Step S196, the second sound pressure level calculation circuit 186 calculates the second sound pressure level based on the MDCT coefficient supplied from the downmixing circuit 185, and supplies the second sound pressure level to the second gain calculation circuit 187. Note that the second sound pressure level is calculated similar to the calculation of obtaining the first sound pressure level.
In Step S197, the second gain calculation circuit 187 calculates the second gain based on the second sound pressure level supplied from the second sound pressure level calculation circuit 186, and supplies the second gain to the gain encoding circuit 189. For example, the second gain is calculated based on the DRC properties of
In Step S198, the gain encoding circuit 189 performs the gain encoding process to thereby encode the first gain supplied from the first gain calculation circuit 184 and the second gain supplied from the second gain calculation circuit 187. Further, the gain encoding circuit 189 supplies the gain encoding mode header and the gain code string obtained as the result of the gain encoding process to the multiplexing circuit 192.
Note that the gain encoding process will be described later in detail. In the gain encoding process, with respect to gain sequences such as the first gain and the second gain, the differential between time frames is obtained and each gain is encoded. Further, a gain encoding mode header is generated only when necessary.
In Step S199, the adaptation bit assigning circuit 190 generates bit assignment information based on the MDCT coefficient supplied from the MDCT circuit 182, and supplies the MDCT coefficient and the bit assignment information to the quantizing/encoding circuit 191.
In Step S200, the quantizing/encoding circuit 191 quantizes and encodes the MDCT coefficient from the adaptation bit assigning circuit 190 based on the bit assignment information supplied from the adaptation bit assigning circuit 190, and supplies the signal code string obtained as the result thereof to the multiplexing circuit 192.
In Step S201, the multiplexing circuit 192 multiplexes the gain encoding mode header and the gain code string supplied from the gain encoding circuit 189, the downmix information supplied from the upper control apparatus, and the signal code string supplied from the quantizing/encoding circuit 191, and outputs the output code string obtained as the result thereof. As a result, for example, the output code string of
In this manner, the output code string of 1 time frame is output as a bitstream, and then the encoding process is finished. Then the encoding process of the next time frame is performed.
As described above, the encoding device 171 calculates the first gain and the second gain in the MDCT domain, i.e., based on the MDCT coefficient, and obtains and encodes the differential between those gains. As a result, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
<Description of Gain Encoding Process>
Next, with reference to the flowchart of
In Step S235, the gain encoding circuit 189 selects one gain sequence as a processed gain sequence, and obtains the differential value between the gain (gain waveform) of the current time frame of the gain sequence and the gain of the previous time frame.
Specifically, the differential between the gain value at each sample location of the current time frame of the processed gain sequence and the gain value at the corresponding sample location of the previous time frame of the processed gain sequence is obtained. In other words, the differential between the time frames of the gain sequence is obtained.
Note that, if the processed gain sequence is a slave gain sequence, the differential value between the time frames of the time waveform, which shows the differential between the slave gain sequence and the master gain sequence obtained in Step S234, is obtained. In other words, the differential value between the time waveform, which shows the differential between the slave gain sequence and the master gain sequence of the current time frame, and the time waveform, which shows the differential between the slave gain sequence and the master gain sequence of the previous time frame, is obtained.
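The inter-frame differencing of Step S235, including the slave-gain handling described above, can be sketched as follows. This is a minimal illustration, not the encoding device 171 itself; the function name and the array-based representation of the gain waveforms are assumptions for the example.

```python
import numpy as np

def encode_gain_differentials(current, previous, master_current=None, master_previous=None):
    """Compute the inter-frame differential for one gain sequence.

    current/previous: gain values at each sample location of the current
    and previous time frames of the processed gain sequence.  For a slave
    gain sequence, the differential between the slave and master waveforms
    is taken first, and the inter-frame differential is then obtained on
    those difference waveforms.
    """
    current = np.asarray(current, dtype=float)
    previous = np.asarray(previous, dtype=float)
    if master_current is not None:
        # Slave sequence: difference against the master gain sequence
        # (Step S234) before taking the inter-frame differential.
        current = current - np.asarray(master_current, dtype=float)
        previous = previous - np.asarray(master_previous, dtype=float)
    # Differential between the current and previous time frames (Step S235).
    return current - previous
```

When successive frames (or a slave and its master) are strongly correlated, these differentials cluster near zero, which is what makes the subsequent encoding of the gain code string compact.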
In Step S236, the gain encoding circuit 189 determines whether or not all the gain sequences have been encoded. For example, if all the gain sequences to be processed have been processed, it is determined that all the gain sequences are encoded.
If it is determined that not all the gain sequences are encoded in Step S236, the process returns to Step S235, and the above-mentioned process is repeated. In other words, an unprocessed gain sequence is to be encoded as the gain sequence to be processed next.
To the contrary, if it is determined that all the gain sequences are encoded in Step S236, the gain encoding circuit 189 treats the differential values between the time frames of the gains of each gain sequence obtained in Step S235 as a gain code string. Further, the gain encoding circuit 189 supplies the generated gain encoding mode header and gain code string to the multiplexing circuit 192. Note that if a gain encoding mode header is not generated, only the gain code string is output.
As described above, when the gain encoding mode header and the gain code string are output, the gain encoding process is finished, and thereafter the process proceeds to Step S199 of
As described above, the encoding device 171 encodes gains by obtaining the differential between gain sequences or the differential between time frames of a gain sequence, and generates a gain code string. By encoding gains in this manner, a first gain and a second gain can be encoded more efficiently. In other words, the quantity of codes obtained as the result of encoding can be further reduced.
<Example of Configuration of Decoding Device>
Next, a decoding device that receives, as an input code string, the output code string output from the encoding device 171 and decodes the input code string will be described.
The decoding device 231 of
The demultiplexing circuit 241 demultiplexes a supplied input code string. The demultiplexing circuit 241 supplies the gain encoding mode header and the gain code string, which are obtained by demultiplexing the input code string, to the gain decoding circuit 243, supplies the signal code string to the decoder/inverse quantizer circuit 242, and in addition, supplies the downmix information to the gain application circuit 244.
The decoder/inverse quantizer circuit 242 decodes and inverse quantizes the signal code string supplied from the demultiplexing circuit 241, and supplies the MDCT coefficient obtained as the result thereof to the gain application circuit 244.
The gain decoding circuit 243 decodes the gain encoding mode header and the gain code string supplied from the demultiplexing circuit 241, and supplies the gain information obtained as the result thereof to the gain application circuit 244.
Based on the downmix control information and the DRC control information supplied from an upper control apparatus, the gain application circuit 244 multiplies the MDCT coefficient supplied from the decoder/inverse quantizer circuit 242 by the gain factor obtained based on the downmix information supplied from the demultiplexing circuit 241 and the gain information supplied from the gain decoding circuit 243, and supplies the obtained gain-applied MDCT coefficient to the inverse MDCT circuit 245.
The inverse MDCT circuit 245 performs the inverse MDCT process to the gain-applied MDCT coefficient supplied from the gain application circuit 244, and supplies the obtained inverse MDCT signal to the windowing/OLA circuit 246. The windowing/OLA circuit 246 performs the windowing and overlap-adding process to the inverse MDCT signal supplied from the inverse MDCT circuit 245, and outputs the output-time-series signal obtained as the result thereof.
<Description of Decoding Process>
Subsequently, behaviors of the decoding device 231 will be described.
When an input code string of 1 time frame is supplied to the decoding device 231, the decoding device 231 decodes the input code string and outputs an output-time-series signal, i.e., performs the decoding process. Hereinafter, with reference to the flowchart of
In Step S261, the demultiplexing circuit 241 demultiplexes a supplied input code string. Further, the demultiplexing circuit 241 supplies the gain encoding mode header and the gain code string, which are obtained by demultiplexing the input code string, to the gain decoding circuit 243, supplies the signal code string to the decoder/inverse quantizer circuit 242, and in addition, supplies the downmix information to the gain application circuit 244.
In Step S262, the decoder/inverse quantizer circuit 242 decodes and inverse quantizes the signal code string supplied from the demultiplexing circuit 241, and supplies the MDCT coefficient obtained as the result thereof to the gain application circuit 244.
In Step S263, the gain decoding circuit 243 performs the gain decoding process to thereby decode the gain encoding mode header and the gain code string supplied from the demultiplexing circuit 241, and supplies the gain information obtained as the result thereof to the gain application circuit 244. Note that the gain decoding process will be described below in detail.
In Step S264, based on the downmix control information and the DRC control information from an upper control apparatus, the gain application circuit 244 multiplies the MDCT coefficient from the decoder/inverse quantizer circuit 242 by the gain factor obtained based on the downmix information from the demultiplexing circuit 241 and the gain information supplied from the gain decoding circuit 243 to thereby adjust the gain.
Specifically, depending on the downmix control information, the gain application circuit 244 multiplies the MDCT coefficient by the gain factor obtained based on the downmix information supplied from the demultiplexing circuit 241. Further, the gain application circuit 244 adds the MDCT coefficients, each of which is multiplied by the gain factor, to thereby calculate the MDCT coefficient of the downmixed channel.
Further, depending on the DRC control information, the gain application circuit 244 multiplies the MDCT coefficient of each downmixed channel by the gain information supplied from the gain decoding circuit 243 to thereby obtain a gain-applied MDCT coefficient.
The gain application circuit 244 supplies the thus obtained gain-applied MDCT coefficient to the inverse MDCT circuit 245.
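The gain application of Steps S264 and onward, in which the MDCT coefficients are downmixed by the gain factors from the downmix information and then multiplied by the decoded gain information, can be sketched as follows. This is an illustrative sketch, not the gain application circuit 244 itself; the matrix representation of the downmix gain factors and the function name are assumptions for the example.

```python
import numpy as np

def apply_gains(mdct_coeffs, downmix_matrix, drc_gain):
    """Downmix MDCT coefficients and apply the decoded gain information.

    mdct_coeffs:    (in_channels, n_bins) MDCT coefficients of each channel
    downmix_matrix: (out_channels, in_channels) gain factors obtained
                    from the downmix information
    drc_gain:       gain information from the gain decoder (scalar or
                    per-channel/per-bin array)
    """
    # Multiply each channel's MDCT coefficients by its downmix gain factor
    # and add the results, yielding the MDCT coefficients of the
    # downmixed channels.
    downmixed = downmix_matrix @ np.asarray(mdct_coeffs, dtype=float)
    # Multiply each downmixed channel by the decoded gain information to
    # obtain the gain-applied MDCT coefficients.
    return downmixed * drc_gain
```

Because both steps are plain multiplications in the MDCT domain, the decoder-side calculation volume stays small, which is one of the advantages noted for this technology.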
In Step S265, the inverse MDCT circuit 245 performs the inverse MDCT process to the gain-applied MDCT coefficient supplied from the gain application circuit 244, and supplies the obtained inverse MDCT signal to the windowing/OLA circuit 246.
In Step S266, the windowing/OLA circuit 246 performs the windowing and overlap-adding process to the inverse MDCT signal supplied from the inverse MDCT circuit 245, and outputs the output-time-series signal obtained as the result thereof. When the output-time-series signal is output, the decoding process is finished.
As described above, the decoding device 231 decodes the gain encoding mode header and the gain code string, applies the obtained gain information to a MDCT coefficient, and adjusts the gain.
The gain code string is obtained by calculating a differential between gain sequences or a differential between time frames of a gain sequence. Because of this, the decoding device 231 can obtain more appropriate gain information from a gain code string with a smaller quantity of codes. In other words, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
<Description of Gain Decoding Process>
Subsequently, with reference to the flowchart of
Note that the processes of Step S291 to Step S293 are similar to the processes of Step S121 to Step S123 of
In Step S294, the gain decoding circuit 243 selects one gain sequence to be processed, and obtains the gain value of the current time frame based on the differential value between the gain value of the previous time frame of the gain sequence and the gain value of the current time frame.
In other words, with reference to MASTER_FLAG and DIFF_SEQ_ID of
If the processed gain sequence is a master gain sequence, the gain decoding circuit 243 adds the gain value at each sample location of the previous time frame of the processed gain sequence and the differential value at the corresponding sample location of the current time frame of the processed gain sequence obtained by decoding the gain code string. The gain value at each sample location of the current time frame obtained as the result thereof is treated as the time waveform of the gain of the current time frame, i.e., the final gain information of the processed gain sequence.
Meanwhile, if the processed gain sequence is a slave gain sequence, the gain decoding circuit 243 obtains the differential value between the gains at the respective sample locations of the master gain sequence of the previous time frame and the gains at the respective sample locations of the processed gain sequence of the previous time frame.
Further, the gain decoding circuit 243 adds the thus obtained differential value and the differential value at each sample location in the current time frame of the processed gain sequence obtained by decoding the gain code string. Further, the gain decoding circuit 243 adds the gain information (gain waveform) on the master gain sequence of the current time frame corresponding to the processed gain sequence to the gain waveform obtained as the result of the addition, and treats the result as the final gain information of the processed gain sequence.
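The reconstruction in Step S294, covering both the master case (previous frame's gains plus decoded differentials) and the slave case (previous frame's slave-master difference plus decoded differentials plus the current master waveform), can be sketched as follows. This is a minimal sketch, not the gain decoding circuit 243 itself; the function name and array representation are assumptions, and it mirrors the inter-frame differencing described on the encoder side.

```python
import numpy as np

def decode_gain_frame(diff, prev_gain, master_gain=None, prev_master=None):
    """Reconstruct the gain waveform of the current time frame.

    diff:        decoded differential values at each sample location
    prev_gain:   gain waveform of the previous time frame of this sequence
    master_gain: gain waveform of the corresponding master gain sequence
                 in the current frame (slave sequences only)
    prev_master: master gain waveform of the previous frame (slave only)
    """
    diff = np.asarray(diff, dtype=float)
    prev_gain = np.asarray(prev_gain, dtype=float)
    if master_gain is None:
        # Master sequence: add the previous frame's gains and the
        # decoded inter-frame differentials.
        return prev_gain + diff
    # Slave sequence: reconstruct the previous frame's slave-master
    # difference waveform, add the decoded inter-frame differentials,
    # then add the master gain waveform of the current frame.
    prev_offset = prev_gain - np.asarray(prev_master, dtype=float)
    return np.asarray(master_gain, dtype=float) + prev_offset + diff
```

Applying this with the differentials produced by the encoder-side differencing recovers the original gain waveform exactly, since only additions invert the subtractions performed at encoding.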
In Step S295, the gain decoding circuit 243 determines if the gain waveforms of all the gain sequences are obtained or not. For example, if all the gain sequences shown in the gain encoding mode header are treated as the processed gain sequences and the gain waveforms (gain information) are obtained, it is determined that the gain waveforms of all the gain sequences are obtained.
In Step S295, if it is determined that the gain waveforms of all the gain sequences have not yet been obtained, the process returns to Step S294, and the above-mentioned process is repeated. In other words, the next gain sequence is processed, and a gain waveform (gain information) is obtained.
To the contrary, if it is determined that the gain waveforms of all the gain sequences are obtained in Step S295, the gain decoding process is finished, and, after that, the process proceeds to Step S264 of
As described above, the decoding device 231 decodes the gain encoding mode header and the gain code string, and calculates the gain information of each gain sequence. In this way, by decoding the gain code string and obtaining the gain information, sound of an appropriate volume level can be obtained with a smaller quantity of codes.
As described above, according to the present technology, encoded sounds can be reproduced at an appropriate volume level under various reproducing environments including presence/absence of downmixing, and clipping noises are not generated under the various reproducing environments. Further, because the required quantity of codes is small, a large amount of gain information can be encoded efficiently. Further, according to the present technology, because the necessary calculation volume of the decoding device is small, the present technology is applicable to mobile terminals and the like.
Note that, according to the above description, to correct the volume level of an input time-series signal, a gain is corrected by means of DRC. Alternatively, to correct the volume level, another correction process by using loudness or the like may be performed. Specifically, according to MPEG AAC, as auxiliary information, the loudness value, which shows the sound pressure level of the entire content, can be described for each frame, and such a corrected loudness value is also encoded as a gain value.
In view of this, the gain of the loudness correction can be also encoded, contained in a gain code string, and sent. To correct loudness, similar to DRC, a gain value corresponding to downmix patterns is required.
Further, when encoding a first gain and a second gain, the differential between gain change points between time frames may be obtained and encoded.
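The variant above, and configurations (3) through (5) below, encode gains via gain change points, i.e., the locations where the inclination of the gain waveform changes, taking the differential from another change point or from a value predicted by first-order prediction. The following is an illustrative sketch of the first-order-prediction variant under assumed representations (the function name and the `(location, gain)` pair encoding are hypothetical):

```python
def encode_change_points(points):
    """Encode gain change points by first-order (linear) prediction.

    points: list of (location, gain) pairs at which the inclination of
    the gain waveform changes.  Each point from the third onward is
    encoded as the differential from the value predicted by linearly
    extrapolating the previous two change points; the second point is
    encoded as a plain differential from the first.
    """
    residuals = []
    for i, (loc, gain) in enumerate(points):
        if i >= 2:
            # First-order prediction from the two preceding change points.
            (l0, g0), (l1, g1) = points[i - 2], points[i - 1]
            slope = (g1 - g0) / (l1 - l0)
            predicted = g1 + slope * (loc - l1)
        elif i == 1:
            # Plain differential from the preceding change point.
            predicted = points[0][1]
        else:
            predicted = 0.0
        residuals.append(gain - predicted)
    return residuals
```

For a gain waveform made of long straight segments, the predicted values are frequently exact, so most residuals are zero and encode very compactly together with the number of change points in the frame.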
By the way, the above-mentioned series of processes can be performed by using hardware or can be performed by using software. If performing the series of processes by using software, a program configuring the software is installed in a computer. Here, examples of a computer include a computer embedded in dedicated hardware, a general-purpose computer, for example, in which various programs are installed and which can perform various functions, and the like.
In the computer, the CPU (Central Processing Unit) 501, the ROM (Read Only Memory) 502, and the RAM (Random Access Memory) 503 are connected to each other via the bus 504.
Further, the input/output interface 505 is connected to the bus 504. To the input/output interface 505, the input unit 506, the output unit 507, the recording unit 508, the communication unit 509, and the drive 510 are connected.
The input unit 506 includes a keyboard, a mouse, a microphone, an image sensor, and the like. The output unit 507 includes a display, a speaker, and the like. The recording unit 508 includes a hard disk, a nonvolatile memory, and the like. The communication unit 509 includes a network interface and the like. The drive 510 drives the removable medium 511 such as a magnetic disk, an optical disk, a magnetooptical disk, a semiconductor memory, or the like.
In the thus configured computer, the CPU 501 loads programs recorded in the recording unit 508, for example, on the RAM 503 via the input/output interface 505 and the bus 504, and executes the programs, whereby the above-mentioned series of processes are performed.
The programs that the computer (the CPU 501) executes may be, for example, recorded in the removable medium 511, i.e., a package medium or the like, and provided. Further, the programs may be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
In the computer, the removable medium 511 is loaded on the drive 510, and thereby the programs can be installed in the recording unit 508 via the input/output interface 505. Further, the programs may be received by the communication unit 509 via a wired or wireless transmission medium, and installed in the recording unit 508. Alternatively, the programs may be preinstalled in the ROM 502 or the recording unit 508.
Note that, the programs that the computer executes may be programs to be processed in time-series in the order described in this specification, programs to be processed in parallel, or programs to be processed at necessary timing, e.g., when they are called.
Further, the embodiments of the present technology are not limited to the above-mentioned embodiments, and may be variously modified within the scope of the gist of the present technology.
For example, the present technology may employ the cloud computing configuration in which apparatuses share one function via a network and cooperatively process the function.
Further, the steps described above with reference to the flowchart may be performed by one apparatus, or may be shared and performed by a plurality of apparatuses.
Further, if one step includes a plurality of processes, the plurality of processes of the one step may be performed by one apparatus, or may be shared and performed by a plurality of apparatuses.
Further, the effects described in this specification are merely examples and not the limitations, and other effects may be attained.
Further, the present technology may employ the following configurations.
(1) An encoding device, including:
a gain calculator that calculates a first gain value and a second gain value for volume level correction of each frame of a sound signal; and
a gain encoder that obtains a first differential value between the first gain value and the second gain value, or obtains a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encodes information based on the first differential value or the second differential value.
(2) The encoding device according to (1), in which
the gain encoder obtains the first differential value between the first gain value and the second gain value at a plurality of locations in the frame, or obtains the second differential value between the first gain values at a plurality of locations in the frame or between the first differential values at a plurality of locations in the frame.
(3) The encoding device according to (1) or (2), in which
the gain encoder obtains the second differential value based on a gain change point, an inclination of the first gain value or the first differential value in the frame changing at the gain change point.
(4) The encoding device according to (3), in which
the gain encoder obtains a differential between the gain change point and another gain change point to thereby obtain the second differential value.
(5) The encoding device according to (3), in which
the gain encoder obtains a differential between the gain change point and a value predicted by first-order prediction based on another gain change point to thereby obtain the second differential value.
(6) The encoding device according to (3), in which
the gain encoder encodes the number of the gain change points in the frame and information based on the second differential value at the gain change points.
(7) The encoding device according to any one of (1) to (6), in which
the gain calculator calculates the second gain value for each sound signal of a different number of channels obtained by downmixing.
(8) The encoding device according to any one of (1) to (7), in which
the gain encoder selects whether or not to obtain the first differential value based on a correlation between the first gain value and the second gain value.
(9) The encoding device according to any one of (1) to (8), in which
the gain encoder variable-length-encodes the first differential value or the second differential value.
(10) An encoding method, including the steps of:
calculating a first gain value and a second gain value for volume level correction of each frame of a sound signal; and
obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value.
(11) A program, causing a computer to execute a process including the steps of:
calculating a first gain value and a second gain value for volume level correction of each frame of a sound signal; and
obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value.
(12) A decoding device, including:
a demultiplexer that demultiplexes an input code string into a gain code string and a signal code string, the gain code string being generated by, with respect to a first gain value and a second gain value for volume level correction calculated for each frame of a sound signal, obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value, the signal code string being obtained by encoding the sound signal;
a signal decoder that decodes the signal code string; and
a gain decoder that decodes the gain code string, and outputs the first gain value or the second gain value for the volume level correction.
(13) The decoding device according to (12), in which
the first differential value is encoded by obtaining a differential value between the first gain value and the second gain value at a plurality of locations in the frame, and
the second differential value is encoded by obtaining a differential value between the first gain values at a plurality of locations in the frame or between the first differential values at a plurality of locations in the frame.
(14) The decoding device according to (12) or (13), in which
the second differential value is obtained based on a gain change point, an inclination of the first gain value or the first differential value in the frame changing at the gain change point, whereby the second differential value is encoded.
(15) The decoding device according to (14), in which
the second differential value is obtained based on a differential between the gain change point and another gain change point, whereby the second differential value is encoded.
(16) The decoding device according to (14), in which
the second differential value is obtained based on a differential between the gain change point and a value predicted by first-order prediction based on another gain change point, whereby the second differential value is encoded.
(17) The decoding device according to any one of (14) to (16), in which
the number of the gain change points in the frame and information based on the second differential value at the gain change points are encoded as the second differential value.
(18) A decoding method, including the steps of:
demultiplexing an input code string into a gain code string and a signal code string, the gain code string being generated by, with respect to a first gain value and a second gain value for volume level correction calculated for each frame of a sound signal, obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value, the signal code string being obtained by encoding the sound signal;
decoding the signal code string; and
decoding the gain code string, and outputting the first gain value or the second gain value for the volume level correction.
(19) A program, causing a computer to execute a process including the steps of:
demultiplexing an input code string into a gain code string and a signal code string, the gain code string being generated by, with respect to a first gain value and a second gain value for volume level correction calculated for each frame of a sound signal, obtaining a first differential value between the first gain value and the second gain value, or obtaining a second differential value between the first gain value and the first gain value of the adjacent frame or between the first differential value and the first differential value of the adjacent frame, and encoding information based on the first differential value or the second differential value, the signal code string being obtained by encoding the sound signal;
decoding the signal code string; and
decoding the gain code string, and outputting the first gain value or the second gain value for the volume level correction.
Number | Date | Country | Kind |
---|---|---|---|
2013-193787 | Sep 2013 | JP | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/JP2014/073465 | 9/5/2014 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/041070 | 3/26/2015 | WO | A |
08-030295 | Feb 1996 | JP |
08-123484 | May 1996 | JP |
10-020888 | Jan 1998 | JP |
2001-134287 | May 2001 | JP |
2001-521648 | Nov 2001 | JP |
2002-536679 | Oct 2002 | JP |
2002-373000 | Dec 2002 | JP |
2003-514267 | Apr 2003 | JP |
2003-216190 | Jul 2003 | JP |
2003-255973 | Sep 2003 | JP |
2003-316394 | Nov 2003 | JP |
2004-101720 | Apr 2004 | JP |
2004-258603 | Sep 2004 | JP |
2005-520219 | Jul 2005 | JP |
2005-521907 | Jul 2005 | JP |
2006-048043 | Feb 2006 | JP |
2007-017908 | Jan 2007 | JP |
2007-171821 | Jul 2007 | JP |
2007-316254 | Dec 2007 | JP |
2007-333785 | Dec 2007 | JP |
2008-107415 | May 2008 | JP |
2008-139844 | Jun 2008 | JP |
2008-158496 | Jul 2008 | JP |
2008-224902 | Sep 2008 | JP |
2008-261978 | Oct 2008 | JP |
2009-116275 | May 2009 | JP |
2009-116371 | May 2009 | JP |
2009-134260 | Jun 2009 | JP |
2010-020251 | Jan 2010 | JP |
2010-079275 | Apr 2010 | JP |
2010-526331 | Jul 2010 | JP |
2010-212760 | Sep 2010 | JP |
2012-504260 | Feb 2012 | JP |
2013-015633 | Jan 2013 | JP |
10-2006-0060928 | Jun 2006 | KR |
10-2007-0083997 | Aug 2007 | KR |
10-2007-0118174 | Dec 2007 | KR |
WO 2004010415 | Jan 2004 | WO |
WO 2004027368 | Apr 2004 | WO |
WO 2005111568 | Nov 2005 | WO |
WO 2006049205 | May 2006 | WO |
WO 2006075563 | Jul 2006 | WO |
WO 2007037361 | Apr 2007 | WO |
WO 2007052088 | May 2007 | WO |
WO 2007126015 | Nov 2007 | WO |
WO 2007129728 | Nov 2007 | WO |
WO 2007142434 | Dec 2007 | WO |
WO 2009001874 | Dec 2008 | WO |
WO 2009004727 | Jan 2009 | WO |
WO 2009029037 | Mar 2009 | WO |
WO 2009054393 | Apr 2009 | WO |
WO 2009059631 | May 2009 | WO |
WO 2009093466 | Jul 2009 | WO |
WO 2010024371 | Mar 2010 | WO |
WO 2011043227 | Apr 2011 | WO |
Entry |
---|
No Author Listed, Information Technology—Coding of audio-visual objects—Part 3: Audio, International Standard, ISO/IEC 14496-3/Amd.1:1999(E), ISO/IEC JTC 1/SC 29/WG 11, 199 pages. |
Baumgarte, F., Enhanced Metadata for Dynamic Range Compression, MPEG Meeting, Apr. 2013, ISO/IEC JTC1/SC29/WG11 MPEG 2013, No. m28901, 10 pages. |
Chennoukh et al., Speech enhancement via frequency bandwidth extension using line spectral frequencies. IEEE International Conference on Acoustics, Speech and Signal Processing, 2001;1:665-668. |
Chinen et al., Report on PVC CE for SBR in USAC, Moving Picture Experts Group Meeting, Oct. 28, 2010, ISO/IEC JTC1/SC29/WG11, No. M18399, 47 pages. |
Krishnan et al., EVRC-Wideband: The New 3GPP2 Wideband Vocoder Standard, Qualcomm Inc., IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 15, 2007, pp. II-333-336. |
Liu et al., High frequency reconstruction for band-limited audio signals. Proc. of the 6th Int'l Conference on Digital Audio Effects (DAFX-03), London, UK, Sep. 8-11, 2003. |
No Author Listed, Information Technology—Coding of audio-visual objects—Part 3: Audio, International Standard, ISO/IEC 14496-3:2001(E), Second Edition, Dec. 15, 2001, 110 pages. |
Number | Date | Country | |
---|---|---|
20160225376 A1 | Aug 2016 | US |